Test Report: KVM_Linux_crio 17907

7ea9a0daea14a922bd9e219098252b67b1b782a8:2024-01-08:32610

Failed tests: 27/298

Order | Failed test | Duration (s)
35 TestAddons/parallel/Ingress 156.01
49 TestAddons/StoppedEnableDisable 155.22
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 17.19
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 175.01
213 TestMultiNode/serial/PingHostFrom2Pods 3.35
220 TestMultiNode/serial/RestartKeepsNodes 708.26
222 TestMultiNode/serial/StopMultiNode 143.39
229 TestPreload 336.09
235 TestRunningBinaryUpgrade 194.13
270 TestStartStop/group/old-k8s-version/serial/Stop 141.2
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.44
278 TestPause/serial/SecondStartNoReconfiguration 317.02
281 TestStartStop/group/no-preload/serial/Stop 139.37
282 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
290 TestStartStop/group/embed-certs/serial/Stop 139.49
293 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.79
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
296 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
298 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 468.18
299 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 508.53
300 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.3
301 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.41
306 TestStartStop/group/newest-cni/serial/Stop 140.43
307 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 12.41
309 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 188.71
310 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 135.89
316 TestStoppedBinaryUpgrade/Upgrade 283.8
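Each entry above is a Go subtest name, so a single failure can be re-run in isolation with Go's -run filter. A minimal sketch, assuming a minikube checkout with the integration tests under test/integration and a prebuilt out/minikube-linux-amd64; the harness-specific flags (driver, container runtime, binary path) are omitted here and depend on the local setup:

	go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 60m -v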
TestAddons/parallel/Ingress (156.01s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-117367 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-117367 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-117367 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [05f824c9-ec7d-412c-a674-1f893cffb657] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [05f824c9-ec7d-412c-a674-1f893cffb657] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.00576394s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-117367 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-117367 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.353420841s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-117367 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-117367 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.205
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-117367 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-117367 addons disable ingress-dns --alsologtostderr -v=1: (1.135271372s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-117367 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-117367 addons disable ingress --alsologtostderr -v=1: (7.843609697s)
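The failure at addons_test.go:278 is a timeout: the remote command's exit status 28 (propagated through ssh) is commonly curl's "operation timed out" code, so the in-VM request to the ingress controller never completed. As a sketch only, the check can be replayed by hand against a live addons-117367 profile using the same commands this test already ran, with -v added to curl for more detail:

	kubectl --context addons-117367 get pods,svc,ingress -n default -o wide
	out/minikube-linux-amd64 -p addons-117367 ssh "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"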
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-117367 -n addons-117367
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-117367 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-117367 logs -n 25: (1.532383997s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-761857 | jenkins | v1.32.0 | 08 Jan 24 20:11 UTC |                     |
	|         | -p download-only-761857                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 08 Jan 24 20:11 UTC | 08 Jan 24 20:11 UTC |
	| delete  | -p download-only-761857                                                                     | download-only-761857 | jenkins | v1.32.0 | 08 Jan 24 20:11 UTC | 08 Jan 24 20:11 UTC |
	| delete  | -p download-only-761857                                                                     | download-only-761857 | jenkins | v1.32.0 | 08 Jan 24 20:11 UTC | 08 Jan 24 20:11 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-115177 | jenkins | v1.32.0 | 08 Jan 24 20:11 UTC |                     |
	|         | binary-mirror-115177                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42657                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-115177                                                                     | binary-mirror-115177 | jenkins | v1.32.0 | 08 Jan 24 20:11 UTC | 08 Jan 24 20:11 UTC |
	| addons  | disable dashboard -p                                                                        | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:11 UTC |                     |
	|         | addons-117367                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:11 UTC |                     |
	|         | addons-117367                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-117367 --wait=true                                                                | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:11 UTC | 08 Jan 24 20:15 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:15 UTC | 08 Jan 24 20:15 UTC |
	|         | -p addons-117367                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-117367 ssh cat                                                                       | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:15 UTC | 08 Jan 24 20:15 UTC |
	|         | /opt/local-path-provisioner/pvc-c8a6c247-5d06-4b89-8f77-d084297eda51_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-117367 addons disable                                                                | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:15 UTC | 08 Jan 24 20:16 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-117367 ip                                                                            | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:15 UTC | 08 Jan 24 20:15 UTC |
	| addons  | addons-117367 addons disable                                                                | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:15 UTC | 08 Jan 24 20:15 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:15 UTC | 08 Jan 24 20:15 UTC |
	|         | addons-117367                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:15 UTC | 08 Jan 24 20:15 UTC |
	|         | -p addons-117367                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-117367 addons                                                                        | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-117367 addons disable                                                                | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | addons-117367                                                                               |                      |         |         |                     |                     |
	| addons  | addons-117367 addons                                                                        | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-117367 addons                                                                        | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC | 08 Jan 24 20:16 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-117367 ssh curl -s                                                                   | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:16 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-117367 ip                                                                            | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	| addons  | addons-117367 addons disable                                                                | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:18 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-117367 addons disable                                                                | addons-117367        | jenkins | v1.32.0 | 08 Jan 24 20:18 UTC | 08 Jan 24 20:19 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:11:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:11:58.287611   18589 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:11:58.287864   18589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:11:58.287874   18589 out.go:309] Setting ErrFile to fd 2...
	I0108 20:11:58.287878   18589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:11:58.288050   18589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 20:11:58.288678   18589 out.go:303] Setting JSON to false
	I0108 20:11:58.289460   18589 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3242,"bootTime":1704741476,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:11:58.289518   18589 start.go:138] virtualization: kvm guest
	I0108 20:11:58.291930   18589 out.go:177] * [addons-117367] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:11:58.293601   18589 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:11:58.293637   18589 notify.go:220] Checking for updates...
	I0108 20:11:58.295126   18589 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:11:58.296611   18589 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:11:58.298188   18589 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:11:58.299974   18589 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:11:58.301576   18589 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:11:58.303289   18589 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:11:58.336281   18589 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 20:11:58.337693   18589 start.go:298] selected driver: kvm2
	I0108 20:11:58.337711   18589 start.go:902] validating driver "kvm2" against <nil>
	I0108 20:11:58.337722   18589 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:11:58.338428   18589 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:11:58.338505   18589 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 20:11:58.352789   18589 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 20:11:58.352859   18589 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:11:58.353073   18589 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:11:58.353122   18589 cni.go:84] Creating CNI manager for ""
	I0108 20:11:58.353137   18589 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 20:11:58.353145   18589 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 20:11:58.353156   18589 start_flags.go:323] config:
	{Name:addons-117367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-117367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:11:58.353284   18589 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:11:58.355552   18589 out.go:177] * Starting control plane node addons-117367 in cluster addons-117367
	I0108 20:11:58.357372   18589 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:11:58.357418   18589 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 20:11:58.357429   18589 cache.go:56] Caching tarball of preloaded images
	I0108 20:11:58.357516   18589 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 20:11:58.357529   18589 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 20:11:58.357851   18589 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/config.json ...
	I0108 20:11:58.357878   18589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/config.json: {Name:mke5e8fc3cb8a9c1e5588db1460af45c1a90061f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:11:58.358006   18589 start.go:365] acquiring machines lock for addons-117367: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 20:11:58.358048   18589 start.go:369] acquired machines lock for "addons-117367" in 29.996µs
	I0108 20:11:58.358064   18589 start.go:93] Provisioning new machine with config: &{Name:addons-117367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-117367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:11:58.358122   18589 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 20:11:58.359999   18589 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0108 20:11:58.360154   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:11:58.360192   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:11:58.374033   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38493
	I0108 20:11:58.374470   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:11:58.374988   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:11:58.375014   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:11:58.375371   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:11:58.375573   18589 main.go:141] libmachine: (addons-117367) Calling .GetMachineName
	I0108 20:11:58.375732   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:11:58.375903   18589 start.go:159] libmachine.API.Create for "addons-117367" (driver="kvm2")
	I0108 20:11:58.375936   18589 client.go:168] LocalClient.Create starting
	I0108 20:11:58.375982   18589 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem
	I0108 20:11:58.513156   18589 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem
	I0108 20:11:58.805713   18589 main.go:141] libmachine: Running pre-create checks...
	I0108 20:11:58.805739   18589 main.go:141] libmachine: (addons-117367) Calling .PreCreateCheck
	I0108 20:11:58.806267   18589 main.go:141] libmachine: (addons-117367) Calling .GetConfigRaw
	I0108 20:11:58.806689   18589 main.go:141] libmachine: Creating machine...
	I0108 20:11:58.806703   18589 main.go:141] libmachine: (addons-117367) Calling .Create
	I0108 20:11:58.806845   18589 main.go:141] libmachine: (addons-117367) Creating KVM machine...
	I0108 20:11:58.808105   18589 main.go:141] libmachine: (addons-117367) DBG | found existing default KVM network
	I0108 20:11:58.808835   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:11:58.808684   18611 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I0108 20:11:58.814771   18589 main.go:141] libmachine: (addons-117367) DBG | trying to create private KVM network mk-addons-117367 192.168.39.0/24...
	I0108 20:11:58.883638   18589 main.go:141] libmachine: (addons-117367) DBG | private KVM network mk-addons-117367 192.168.39.0/24 created
	I0108 20:11:58.883673   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:11:58.883594   18611 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:11:58.883690   18589 main.go:141] libmachine: (addons-117367) Setting up store path in /home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367 ...
	I0108 20:11:58.883709   18589 main.go:141] libmachine: (addons-117367) Building disk image from file:///home/jenkins/minikube-integration/17907-10702/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 20:11:58.883792   18589 main.go:141] libmachine: (addons-117367) Downloading /home/jenkins/minikube-integration/17907-10702/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17907-10702/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 20:11:59.115984   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:11:59.115867   18611 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa...
	I0108 20:11:59.250792   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:11:59.250641   18611 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/addons-117367.rawdisk...
	I0108 20:11:59.250839   18589 main.go:141] libmachine: (addons-117367) DBG | Writing magic tar header
	I0108 20:11:59.250856   18589 main.go:141] libmachine: (addons-117367) DBG | Writing SSH key tar header
	I0108 20:11:59.250870   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:11:59.250755   18611 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367 ...
	I0108 20:11:59.250907   18589 main.go:141] libmachine: (addons-117367) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367 (perms=drwx------)
	I0108 20:11:59.250936   18589 main.go:141] libmachine: (addons-117367) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367
	I0108 20:11:59.250953   18589 main.go:141] libmachine: (addons-117367) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube/machines (perms=drwxr-xr-x)
	I0108 20:11:59.250984   18589 main.go:141] libmachine: (addons-117367) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube/machines
	I0108 20:11:59.251005   18589 main.go:141] libmachine: (addons-117367) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube (perms=drwxr-xr-x)
	I0108 20:11:59.251013   18589 main.go:141] libmachine: (addons-117367) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:11:59.251022   18589 main.go:141] libmachine: (addons-117367) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702
	I0108 20:11:59.251032   18589 main.go:141] libmachine: (addons-117367) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 20:11:59.251043   18589 main.go:141] libmachine: (addons-117367) DBG | Checking permissions on dir: /home/jenkins
	I0108 20:11:59.251058   18589 main.go:141] libmachine: (addons-117367) DBG | Checking permissions on dir: /home
	I0108 20:11:59.251075   18589 main.go:141] libmachine: (addons-117367) DBG | Skipping /home - not owner
	I0108 20:11:59.251097   18589 main.go:141] libmachine: (addons-117367) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702 (perms=drwxrwxr-x)
	I0108 20:11:59.251110   18589 main.go:141] libmachine: (addons-117367) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 20:11:59.251126   18589 main.go:141] libmachine: (addons-117367) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 20:11:59.251139   18589 main.go:141] libmachine: (addons-117367) Creating domain...
	I0108 20:11:59.251995   18589 main.go:141] libmachine: (addons-117367) define libvirt domain using xml: 
	I0108 20:11:59.252011   18589 main.go:141] libmachine: (addons-117367) <domain type='kvm'>
	I0108 20:11:59.252030   18589 main.go:141] libmachine: (addons-117367)   <name>addons-117367</name>
	I0108 20:11:59.252040   18589 main.go:141] libmachine: (addons-117367)   <memory unit='MiB'>4000</memory>
	I0108 20:11:59.252049   18589 main.go:141] libmachine: (addons-117367)   <vcpu>2</vcpu>
	I0108 20:11:59.252068   18589 main.go:141] libmachine: (addons-117367)   <features>
	I0108 20:11:59.252083   18589 main.go:141] libmachine: (addons-117367)     <acpi/>
	I0108 20:11:59.252118   18589 main.go:141] libmachine: (addons-117367)     <apic/>
	I0108 20:11:59.252132   18589 main.go:141] libmachine: (addons-117367)     <pae/>
	I0108 20:11:59.252141   18589 main.go:141] libmachine: (addons-117367)     
	I0108 20:11:59.252155   18589 main.go:141] libmachine: (addons-117367)   </features>
	I0108 20:11:59.252168   18589 main.go:141] libmachine: (addons-117367)   <cpu mode='host-passthrough'>
	I0108 20:11:59.252181   18589 main.go:141] libmachine: (addons-117367)   
	I0108 20:11:59.252197   18589 main.go:141] libmachine: (addons-117367)   </cpu>
	I0108 20:11:59.252210   18589 main.go:141] libmachine: (addons-117367)   <os>
	I0108 20:11:59.252223   18589 main.go:141] libmachine: (addons-117367)     <type>hvm</type>
	I0108 20:11:59.252241   18589 main.go:141] libmachine: (addons-117367)     <boot dev='cdrom'/>
	I0108 20:11:59.252254   18589 main.go:141] libmachine: (addons-117367)     <boot dev='hd'/>
	I0108 20:11:59.252268   18589 main.go:141] libmachine: (addons-117367)     <bootmenu enable='no'/>
	I0108 20:11:59.252286   18589 main.go:141] libmachine: (addons-117367)   </os>
	I0108 20:11:59.252298   18589 main.go:141] libmachine: (addons-117367)   <devices>
	I0108 20:11:59.252309   18589 main.go:141] libmachine: (addons-117367)     <disk type='file' device='cdrom'>
	I0108 20:11:59.252328   18589 main.go:141] libmachine: (addons-117367)       <source file='/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/boot2docker.iso'/>
	I0108 20:11:59.252350   18589 main.go:141] libmachine: (addons-117367)       <target dev='hdc' bus='scsi'/>
	I0108 20:11:59.252364   18589 main.go:141] libmachine: (addons-117367)       <readonly/>
	I0108 20:11:59.252390   18589 main.go:141] libmachine: (addons-117367)     </disk>
	I0108 20:11:59.252409   18589 main.go:141] libmachine: (addons-117367)     <disk type='file' device='disk'>
	I0108 20:11:59.252419   18589 main.go:141] libmachine: (addons-117367)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 20:11:59.252455   18589 main.go:141] libmachine: (addons-117367)       <source file='/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/addons-117367.rawdisk'/>
	I0108 20:11:59.252473   18589 main.go:141] libmachine: (addons-117367)       <target dev='hda' bus='virtio'/>
	I0108 20:11:59.252482   18589 main.go:141] libmachine: (addons-117367)     </disk>
	I0108 20:11:59.252496   18589 main.go:141] libmachine: (addons-117367)     <interface type='network'>
	I0108 20:11:59.252506   18589 main.go:141] libmachine: (addons-117367)       <source network='mk-addons-117367'/>
	I0108 20:11:59.252519   18589 main.go:141] libmachine: (addons-117367)       <model type='virtio'/>
	I0108 20:11:59.252536   18589 main.go:141] libmachine: (addons-117367)     </interface>
	I0108 20:11:59.252551   18589 main.go:141] libmachine: (addons-117367)     <interface type='network'>
	I0108 20:11:59.252563   18589 main.go:141] libmachine: (addons-117367)       <source network='default'/>
	I0108 20:11:59.252585   18589 main.go:141] libmachine: (addons-117367)       <model type='virtio'/>
	I0108 20:11:59.252628   18589 main.go:141] libmachine: (addons-117367)     </interface>
	I0108 20:11:59.252647   18589 main.go:141] libmachine: (addons-117367)     <serial type='pty'>
	I0108 20:11:59.252660   18589 main.go:141] libmachine: (addons-117367)       <target port='0'/>
	I0108 20:11:59.252673   18589 main.go:141] libmachine: (addons-117367)     </serial>
	I0108 20:11:59.252684   18589 main.go:141] libmachine: (addons-117367)     <console type='pty'>
	I0108 20:11:59.252702   18589 main.go:141] libmachine: (addons-117367)       <target type='serial' port='0'/>
	I0108 20:11:59.252721   18589 main.go:141] libmachine: (addons-117367)     </console>
	I0108 20:11:59.252736   18589 main.go:141] libmachine: (addons-117367)     <rng model='virtio'>
	I0108 20:11:59.252750   18589 main.go:141] libmachine: (addons-117367)       <backend model='random'>/dev/random</backend>
	I0108 20:11:59.252764   18589 main.go:141] libmachine: (addons-117367)     </rng>
	I0108 20:11:59.252775   18589 main.go:141] libmachine: (addons-117367)     
	I0108 20:11:59.252785   18589 main.go:141] libmachine: (addons-117367)     
	I0108 20:11:59.252805   18589 main.go:141] libmachine: (addons-117367)   </devices>
	I0108 20:11:59.252818   18589 main.go:141] libmachine: (addons-117367) </domain>
	I0108 20:11:59.252830   18589 main.go:141] libmachine: (addons-117367) 
	I0108 20:11:59.258829   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:ef:f7:d6 in network default
	I0108 20:11:59.259375   18589 main.go:141] libmachine: (addons-117367) Ensuring networks are active...
	I0108 20:11:59.259392   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:11:59.260021   18589 main.go:141] libmachine: (addons-117367) Ensuring network default is active
	I0108 20:11:59.260301   18589 main.go:141] libmachine: (addons-117367) Ensuring network mk-addons-117367 is active
	I0108 20:11:59.261828   18589 main.go:141] libmachine: (addons-117367) Getting domain xml...
	I0108 20:11:59.262508   18589 main.go:141] libmachine: (addons-117367) Creating domain...
	I0108 20:12:00.549756   18589 main.go:141] libmachine: (addons-117367) Waiting to get IP...
	I0108 20:12:00.550481   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:00.550799   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:00.550816   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:00.550788   18611 retry.go:31] will retry after 204.140335ms: waiting for machine to come up
	I0108 20:12:00.756325   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:00.756837   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:00.756879   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:00.756763   18611 retry.go:31] will retry after 309.195671ms: waiting for machine to come up
	I0108 20:12:01.067220   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:01.067645   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:01.067676   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:01.067588   18611 retry.go:31] will retry after 295.502176ms: waiting for machine to come up
	I0108 20:12:01.365210   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:01.365712   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:01.365764   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:01.365702   18611 retry.go:31] will retry after 436.074485ms: waiting for machine to come up
	I0108 20:12:01.803362   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:01.803940   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:01.803970   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:01.803875   18611 retry.go:31] will retry after 547.268634ms: waiting for machine to come up
	I0108 20:12:02.352733   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:02.353356   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:02.353378   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:02.353287   18611 retry.go:31] will retry after 830.920301ms: waiting for machine to come up
	I0108 20:12:03.186913   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:03.187396   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:03.187426   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:03.187348   18611 retry.go:31] will retry after 731.180837ms: waiting for machine to come up
	I0108 20:12:03.920182   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:03.920697   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:03.920728   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:03.920636   18611 retry.go:31] will retry after 1.038484443s: waiting for machine to come up
	I0108 20:12:04.960960   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:04.961454   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:04.961489   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:04.961390   18611 retry.go:31] will retry after 1.825444536s: waiting for machine to come up
	I0108 20:12:06.788248   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:06.788650   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:06.788684   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:06.788599   18611 retry.go:31] will retry after 1.681118878s: waiting for machine to come up
	I0108 20:12:08.471438   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:08.471892   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:08.471925   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:08.471812   18611 retry.go:31] will retry after 2.454513777s: waiting for machine to come up
	I0108 20:12:10.929858   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:10.930339   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:10.930362   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:10.930272   18611 retry.go:31] will retry after 3.181445627s: waiting for machine to come up
	I0108 20:12:14.112948   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:14.113378   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:14.113407   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:14.113337   18611 retry.go:31] will retry after 3.929474395s: waiting for machine to come up
	I0108 20:12:18.047299   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:18.047709   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find current IP address of domain addons-117367 in network mk-addons-117367
	I0108 20:12:18.047737   18589 main.go:141] libmachine: (addons-117367) DBG | I0108 20:12:18.047667   18611 retry.go:31] will retry after 3.91364264s: waiting for machine to come up
	I0108 20:12:21.965904   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:21.966371   18589 main.go:141] libmachine: (addons-117367) Found IP for machine: 192.168.39.205
	I0108 20:12:21.966400   18589 main.go:141] libmachine: (addons-117367) Reserving static IP address...
	I0108 20:12:21.966413   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has current primary IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:21.966856   18589 main.go:141] libmachine: (addons-117367) DBG | unable to find host DHCP lease matching {name: "addons-117367", mac: "52:54:00:12:96:f3", ip: "192.168.39.205"} in network mk-addons-117367
	I0108 20:12:22.041349   18589 main.go:141] libmachine: (addons-117367) DBG | Getting to WaitForSSH function...
	I0108 20:12:22.041382   18589 main.go:141] libmachine: (addons-117367) Reserved static IP address: 192.168.39.205
	I0108 20:12:22.041399   18589 main.go:141] libmachine: (addons-117367) Waiting for SSH to be available...
	I0108 20:12:22.044486   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.045043   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:minikube Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:22.045077   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.045211   18589 main.go:141] libmachine: (addons-117367) DBG | Using SSH client type: external
	I0108 20:12:22.045243   18589 main.go:141] libmachine: (addons-117367) DBG | Using SSH private key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa (-rw-------)
	I0108 20:12:22.045322   18589 main.go:141] libmachine: (addons-117367) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 20:12:22.045342   18589 main.go:141] libmachine: (addons-117367) DBG | About to run SSH command:
	I0108 20:12:22.045356   18589 main.go:141] libmachine: (addons-117367) DBG | exit 0
	I0108 20:12:22.179997   18589 main.go:141] libmachine: (addons-117367) DBG | SSH cmd err, output: <nil>: 
	I0108 20:12:22.180343   18589 main.go:141] libmachine: (addons-117367) KVM machine creation complete!
	I0108 20:12:22.180629   18589 main.go:141] libmachine: (addons-117367) Calling .GetConfigRaw
	I0108 20:12:22.181225   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:22.181407   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:22.181588   18589 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 20:12:22.181602   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:22.182826   18589 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 20:12:22.182841   18589 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 20:12:22.182848   18589 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 20:12:22.182854   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:22.185027   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.185295   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:22.185330   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.185457   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:22.185650   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:22.185824   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:22.185961   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:22.186136   18589 main.go:141] libmachine: Using SSH client type: native
	I0108 20:12:22.186496   18589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0108 20:12:22.186513   18589 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 20:12:22.299836   18589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:12:22.299870   18589 main.go:141] libmachine: Detecting the provisioner...
	I0108 20:12:22.299892   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:22.302611   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.303070   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:22.303101   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.303361   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:22.303559   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:22.303830   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:22.303988   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:22.304181   18589 main.go:141] libmachine: Using SSH client type: native
	I0108 20:12:22.304556   18589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0108 20:12:22.304574   18589 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 20:12:22.416886   18589 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 20:12:22.417003   18589 main.go:141] libmachine: found compatible host: buildroot
	I0108 20:12:22.417022   18589 main.go:141] libmachine: Provisioning with buildroot...
	I0108 20:12:22.417034   18589 main.go:141] libmachine: (addons-117367) Calling .GetMachineName
	I0108 20:12:22.417317   18589 buildroot.go:166] provisioning hostname "addons-117367"
	I0108 20:12:22.417340   18589 main.go:141] libmachine: (addons-117367) Calling .GetMachineName
	I0108 20:12:22.417509   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:22.419825   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.420246   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:22.420281   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.420471   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:22.420640   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:22.420823   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:22.420927   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:22.421075   18589 main.go:141] libmachine: Using SSH client type: native
	I0108 20:12:22.421441   18589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0108 20:12:22.421456   18589 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-117367 && echo "addons-117367" | sudo tee /etc/hostname
	I0108 20:12:22.549772   18589 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-117367
	
	I0108 20:12:22.549800   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:22.552478   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.552941   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:22.552964   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.553151   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:22.553343   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:22.553539   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:22.553704   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:22.553910   18589 main.go:141] libmachine: Using SSH client type: native
	I0108 20:12:22.554363   18589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0108 20:12:22.554391   18589 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-117367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-117367/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-117367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:12:22.672383   18589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
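For reference, the hostname provisioning step above reduces to two shell commands run over SSH (a minimal sketch assembled from the commands this log shows, using the hostname from this run):

	# set the transient hostname and persist it
	sudo hostname addons-117367 && echo "addons-117367" | sudo tee /etc/hostname

	# point 127.0.1.1 at the new hostname without adding duplicate /etc/hosts entries
	if ! grep -xq '.*\saddons-117367' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-117367/g' /etc/hosts
		else
			echo '127.0.1.1 addons-117367' | sudo tee -a /etc/hosts
		fi
	fi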
	I0108 20:12:22.672416   18589 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17907-10702/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-10702/.minikube}
	I0108 20:12:22.672440   18589 buildroot.go:174] setting up certificates
	I0108 20:12:22.672455   18589 provision.go:83] configureAuth start
	I0108 20:12:22.672467   18589 main.go:141] libmachine: (addons-117367) Calling .GetMachineName
	I0108 20:12:22.672785   18589 main.go:141] libmachine: (addons-117367) Calling .GetIP
	I0108 20:12:22.675534   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.675817   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:22.675843   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.676022   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:22.678379   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.678731   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:22.678761   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.678863   18589 provision.go:138] copyHostCerts
	I0108 20:12:22.678931   18589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem (1082 bytes)
	I0108 20:12:22.679046   18589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem (1123 bytes)
	I0108 20:12:22.679116   18589 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem (1675 bytes)
	I0108 20:12:22.679178   18589 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem org=jenkins.addons-117367 san=[192.168.39.205 192.168.39.205 localhost 127.0.0.1 minikube addons-117367]
	I0108 20:12:22.876561   18589 provision.go:172] copyRemoteCerts
	I0108 20:12:22.876616   18589 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:12:22.876638   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:22.879274   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.879577   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:22.879605   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:22.879769   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:22.879951   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:22.880120   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:22.880245   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:22.965802   18589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:12:22.990074   18589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:12:23.013758   18589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 20:12:23.037203   18589 provision.go:86] duration metric: configureAuth took 364.734884ms
	I0108 20:12:23.037231   18589 buildroot.go:189] setting minikube options for container-runtime
	I0108 20:12:23.037409   18589 config.go:182] Loaded profile config "addons-117367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:12:23.037484   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:23.040061   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.040535   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:23.040569   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.040755   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:23.040945   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:23.041111   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:23.041301   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:23.041486   18589 main.go:141] libmachine: Using SSH client type: native
	I0108 20:12:23.041811   18589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0108 20:12:23.041827   18589 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:12:23.361390   18589 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
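The %!s(MISSING) in the command above is a logging artifact: the command string contains a literal %s that the logger treated as a format verb with no argument. Reconstructed from the output echoed back, the command that actually ran on the guest is:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio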
	
	I0108 20:12:23.361425   18589 main.go:141] libmachine: Checking connection to Docker...
	I0108 20:12:23.361460   18589 main.go:141] libmachine: (addons-117367) Calling .GetURL
	I0108 20:12:23.362714   18589 main.go:141] libmachine: (addons-117367) DBG | Using libvirt version 6000000
	I0108 20:12:23.365502   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.365916   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:23.365957   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.366162   18589 main.go:141] libmachine: Docker is up and running!
	I0108 20:12:23.366184   18589 main.go:141] libmachine: Reticulating splines...
	I0108 20:12:23.366191   18589 client.go:171] LocalClient.Create took 24.990244321s
	I0108 20:12:23.366211   18589 start.go:167] duration metric: libmachine.API.Create for "addons-117367" took 24.990310911s
	I0108 20:12:23.366225   18589 start.go:300] post-start starting for "addons-117367" (driver="kvm2")
	I0108 20:12:23.366235   18589 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:12:23.366260   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:23.366602   18589 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:12:23.366632   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:23.368770   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.369072   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:23.369095   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.369290   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:23.369479   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:23.369641   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:23.369759   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:23.454616   18589 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:12:23.459130   18589 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 20:12:23.459173   18589 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/addons for local assets ...
	I0108 20:12:23.459351   18589 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/files for local assets ...
	I0108 20:12:23.459407   18589 start.go:303] post-start completed in 93.176227ms
	I0108 20:12:23.459482   18589 main.go:141] libmachine: (addons-117367) Calling .GetConfigRaw
	I0108 20:12:23.460556   18589 main.go:141] libmachine: (addons-117367) Calling .GetIP
	I0108 20:12:23.464227   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.464542   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:23.464564   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.464763   18589 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/config.json ...
	I0108 20:12:23.464920   18589 start.go:128] duration metric: createHost completed in 25.10678875s
	I0108 20:12:23.464940   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:23.467210   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.467556   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:23.467575   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.467735   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:23.467924   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:23.468058   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:23.468213   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:23.468356   18589 main.go:141] libmachine: Using SSH client type: native
	I0108 20:12:23.468683   18589 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0108 20:12:23.468698   18589 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 20:12:23.581045   18589 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704744743.560743452
	
	I0108 20:12:23.581065   18589 fix.go:206] guest clock: 1704744743.560743452
	I0108 20:12:23.581072   18589 fix.go:219] Guest: 2024-01-08 20:12:23.560743452 +0000 UTC Remote: 2024-01-08 20:12:23.464930546 +0000 UTC m=+25.224764007 (delta=95.812906ms)
	I0108 20:12:23.581107   18589 fix.go:190] guest clock delta is within tolerance: 95.812906ms
	I0108 20:12:23.581118   18589 start.go:83] releasing machines lock for "addons-117367", held for 25.223061436s
	I0108 20:12:23.581145   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:23.581421   18589 main.go:141] libmachine: (addons-117367) Calling .GetIP
	I0108 20:12:23.583798   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.584125   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:23.584153   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.584336   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:23.584864   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:23.585081   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:23.585189   18589 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:12:23.585221   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:23.585339   18589 ssh_runner.go:195] Run: cat /version.json
	I0108 20:12:23.585365   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:23.587811   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.587983   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.588169   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:23.588199   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.588313   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:23.588424   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:23.588449   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:23.588482   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:23.588626   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:23.588641   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:23.588759   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:23.588932   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:23.588923   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:23.589060   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:23.695578   18589 ssh_runner.go:195] Run: systemctl --version
	I0108 20:12:23.701584   18589 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:12:23.874283   18589 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 20:12:23.880754   18589 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 20:12:23.880815   18589 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:12:23.896943   18589 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 20:12:23.896969   18589 start.go:475] detecting cgroup driver to use...
	I0108 20:12:23.897037   18589 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:12:23.916640   18589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:12:23.929030   18589 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:12:23.929096   18589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:12:23.941960   18589 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:12:23.954879   18589 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:12:24.057940   18589 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:12:24.182512   18589 docker.go:233] disabling docker service ...
	I0108 20:12:24.182605   18589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:12:24.195866   18589 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:12:24.207399   18589 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:12:24.323693   18589 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:12:24.438576   18589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:12:24.450993   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:12:24.469470   18589 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 20:12:24.469529   18589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:12:24.478872   18589 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:12:24.478937   18589 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:12:24.488358   18589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:12:24.498531   18589 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:12:24.507949   18589 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:12:24.517350   18589 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:12:24.525243   18589 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 20:12:24.525298   18589 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 20:12:24.536896   18589 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:12:24.546500   18589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:12:24.662258   18589 ssh_runner.go:195] Run: sudo systemctl restart crio
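Taken together, the container-runtime setup in this stretch of the log amounts to the following guest-side commands (a condensed sketch of the commands shown above, with the values used in this run):

	# point crictl at the cri-o socket
	sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml

	# pin the pause image and switch cri-o to the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo rm -rf /etc/cni/net.mk

	# bridge netfilter is not built into this guest kernel, so load the module and enable forwarding
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"

	# reload units and restart cri-o so the changes take effect
	sudo systemctl daemon-reload
	sudo systemctl restart crio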
	I0108 20:12:24.853861   18589 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:12:24.853948   18589 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:12:24.860074   18589 start.go:543] Will wait 60s for crictl version
	I0108 20:12:24.860160   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:12:24.864265   18589 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:12:24.902591   18589 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 20:12:24.902710   18589 ssh_runner.go:195] Run: crio --version
	I0108 20:12:24.956902   18589 ssh_runner.go:195] Run: crio --version
	I0108 20:12:25.006393   18589 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 20:12:25.007866   18589 main.go:141] libmachine: (addons-117367) Calling .GetIP
	I0108 20:12:25.010534   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:25.010860   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:25.010883   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:25.011087   18589 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 20:12:25.015367   18589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:12:25.029091   18589 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:12:25.029158   18589 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:12:25.064450   18589 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 20:12:25.064522   18589 ssh_runner.go:195] Run: which lz4
	I0108 20:12:25.068566   18589 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 20:12:25.072952   18589 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 20:12:25.072987   18589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 20:12:26.809394   18589 crio.go:444] Took 1.740857 seconds to copy over tarball
	I0108 20:12:26.809482   18589 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 20:12:29.927199   18589 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.117687769s)
	I0108 20:12:29.927230   18589 crio.go:451] Took 3.117813 seconds to extract the tarball
	I0108 20:12:29.927239   18589 ssh_runner.go:146] rm: /preloaded.tar.lz4
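The preload handling above follows a simple pattern: check whether the image tarball is already on the guest, copy the cached tarball over if it is not, unpack it into /var, and delete it. On the guest side that corresponds to the following commands (a sketch of what this log runs; the copy itself is done by minikube's internal scp helper rather than a shell command):

	stat -c "%s %y" /preloaded.tar.lz4          # fails on a fresh VM, which triggers the copy
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	rm /preloaded.tar.lz4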
	I0108 20:12:29.969219   18589 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:12:30.045067   18589 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 20:12:30.045088   18589 cache_images.go:84] Images are preloaded, skipping loading
	I0108 20:12:30.045146   18589 ssh_runner.go:195] Run: crio config
	I0108 20:12:30.105854   18589 cni.go:84] Creating CNI manager for ""
	I0108 20:12:30.105883   18589 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 20:12:30.105904   18589 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:12:30.105928   18589 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-117367 NodeName:addons-117367 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:12:30.106094   18589 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-117367"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.205
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
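The four kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml and fed to kubeadm init later in this log. When debugging a bootstrap failure, the same file can be exercised without changing node state (illustrative only; this run did not pass --dry-run, but it is a standard kubeadm flag):

	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
		kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run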
	
	I0108 20:12:30.106192   18589 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-117367 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-117367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 20:12:30.106268   18589 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:12:30.115668   18589 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:12:30.115726   18589 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:12:30.124603   18589 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0108 20:12:30.141685   18589 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:12:30.159450   18589 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0108 20:12:30.177145   18589 ssh_runner.go:195] Run: grep 192.168.39.205	control-plane.minikube.internal$ /etc/hosts
	I0108 20:12:30.181433   18589 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:12:30.195177   18589 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367 for IP: 192.168.39.205
	I0108 20:12:30.195218   18589 certs.go:190] acquiring lock for shared ca certs: {Name:mke01aa9d73e320a9a3907677cf29c75f0fa86d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:12:30.195367   18589 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key
	I0108 20:12:30.487694   18589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt ...
	I0108 20:12:30.487731   18589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt: {Name:mk750cb9478ea116cdbe229ee8c3f86a84a7df0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:12:30.487923   18589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key ...
	I0108 20:12:30.487935   18589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key: {Name:mk217bbfa67f27059e52f087c17dabf5222c888a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:12:30.488004   18589 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key
	I0108 20:12:30.730551   18589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt ...
	I0108 20:12:30.730583   18589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt: {Name:mk3ddbddc209ba45196d6b3ff245fbe8eebc6d71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:12:30.730751   18589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key ...
	I0108 20:12:30.730766   18589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key: {Name:mk67ba196bc3dd22e21d84f8b3b10658b241267c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:12:30.730868   18589 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.key
	I0108 20:12:30.730898   18589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt with IP's: []
	I0108 20:12:30.826603   18589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt ...
	I0108 20:12:30.826632   18589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: {Name:mk21a9b9baa3d709e1c31f1aff07b792c305d058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:12:30.826783   18589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.key ...
	I0108 20:12:30.826795   18589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.key: {Name:mk2baa71e6bf42f3bc6874f50ac7908c53d9e9ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:12:30.826865   18589 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/apiserver.key.358d92cb
	I0108 20:12:30.826882   18589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/apiserver.crt.358d92cb with IP's: [192.168.39.205 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 20:12:31.047810   18589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/apiserver.crt.358d92cb ...
	I0108 20:12:31.047842   18589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/apiserver.crt.358d92cb: {Name:mk2a60823a557075675daa8049d89d7694c66975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:12:31.047991   18589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/apiserver.key.358d92cb ...
	I0108 20:12:31.048005   18589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/apiserver.key.358d92cb: {Name:mk98048bfd263b23e45cb72cf8123cbefa676ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:12:31.048074   18589 certs.go:337] copying /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/apiserver.crt.358d92cb -> /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/apiserver.crt
	I0108 20:12:31.048175   18589 certs.go:341] copying /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/apiserver.key.358d92cb -> /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/apiserver.key
	I0108 20:12:31.048224   18589 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/proxy-client.key
	I0108 20:12:31.048241   18589 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/proxy-client.crt with IP's: []
	I0108 20:12:31.123261   18589 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/proxy-client.crt ...
	I0108 20:12:31.123288   18589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/proxy-client.crt: {Name:mked75a108143dfd0c98a7122adec0a776e8c101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:12:31.123439   18589 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/proxy-client.key ...
	I0108 20:12:31.123449   18589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/proxy-client.key: {Name:mke8e58a4275095dc86302c3230f8ecb931f3472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:12:31.123604   18589 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:12:31.123636   18589 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem (1082 bytes)
	I0108 20:12:31.123660   18589 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:12:31.123683   18589 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem (1675 bytes)
	I0108 20:12:31.124319   18589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:12:31.149666   18589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 20:12:31.174177   18589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:12:31.198622   18589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 20:12:31.223122   18589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:12:31.249157   18589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:12:31.273719   18589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:12:31.298186   18589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:12:31.323154   18589 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:12:31.348584   18589 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:12:31.366058   18589 ssh_runner.go:195] Run: openssl version
	I0108 20:12:31.372309   18589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:12:31.383144   18589 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:12:31.388772   18589 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:12:31.388840   18589 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:12:31.395122   18589 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:12:31.405833   18589 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:12:31.410574   18589 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:12:31.410627   18589 kubeadm.go:404] StartCluster: {Name:addons-117367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-117367 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:12:31.410698   18589 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 20:12:31.410769   18589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:12:31.448984   18589 cri.go:89] found id: ""
	I0108 20:12:31.449048   18589 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:12:31.458471   18589 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:12:31.467453   18589 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:12:31.478567   18589 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:12:31.478608   18589 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 20:12:31.675922   18589 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:12:43.784148   18589 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 20:12:43.784222   18589 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 20:12:43.784314   18589 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:12:43.784399   18589 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:12:43.784544   18589 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 20:12:43.784646   18589 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:12:43.786344   18589 out.go:204]   - Generating certificates and keys ...
	I0108 20:12:43.786442   18589 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 20:12:43.786528   18589 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 20:12:43.786644   18589 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:12:43.786741   18589 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:12:43.786832   18589 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 20:12:43.786914   18589 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 20:12:43.786988   18589 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 20:12:43.787156   18589 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-117367 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I0108 20:12:43.787231   18589 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 20:12:43.787366   18589 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-117367 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I0108 20:12:43.787439   18589 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:12:43.787508   18589 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:12:43.787570   18589 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 20:12:43.787653   18589 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:12:43.787718   18589 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:12:43.787800   18589 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:12:43.787897   18589 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:12:43.787980   18589 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:12:43.788100   18589 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:12:43.788184   18589 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:12:43.789889   18589 out.go:204]   - Booting up control plane ...
	I0108 20:12:43.789998   18589 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:12:43.790107   18589 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:12:43.790188   18589 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:12:43.790319   18589 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:12:43.790447   18589 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:12:43.790518   18589 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 20:12:43.790668   18589 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:12:43.790739   18589 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003682 seconds
	I0108 20:12:43.790817   18589 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:12:43.790911   18589 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:12:43.790955   18589 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:12:43.791122   18589 kubeadm.go:322] [mark-control-plane] Marking the node addons-117367 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 20:12:43.791198   18589 kubeadm.go:322] [bootstrap-token] Using token: tgqiiv.1yx1hw794hsivrh3
	I0108 20:12:43.792641   18589 out.go:204]   - Configuring RBAC rules ...
	I0108 20:12:43.792769   18589 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:12:43.792862   18589 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:12:43.793004   18589 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:12:43.793135   18589 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 20:12:43.793290   18589 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:12:43.793407   18589 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:12:43.793568   18589 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:12:43.793625   18589 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 20:12:43.793691   18589 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 20:12:43.793703   18589 kubeadm.go:322] 
	I0108 20:12:43.793769   18589 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 20:12:43.793779   18589 kubeadm.go:322] 
	I0108 20:12:43.793862   18589 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 20:12:43.793869   18589 kubeadm.go:322] 
	I0108 20:12:43.793891   18589 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 20:12:43.793963   18589 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:12:43.794030   18589 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:12:43.794039   18589 kubeadm.go:322] 
	I0108 20:12:43.794111   18589 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 20:12:43.794121   18589 kubeadm.go:322] 
	I0108 20:12:43.794195   18589 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 20:12:43.794202   18589 kubeadm.go:322] 
	I0108 20:12:43.794264   18589 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 20:12:43.794359   18589 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:12:43.794457   18589 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:12:43.794470   18589 kubeadm.go:322] 
	I0108 20:12:43.794565   18589 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:12:43.794653   18589 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 20:12:43.794662   18589 kubeadm.go:322] 
	I0108 20:12:43.794758   18589 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token tgqiiv.1yx1hw794hsivrh3 \
	I0108 20:12:43.794881   18589 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 \
	I0108 20:12:43.794924   18589 kubeadm.go:322] 	--control-plane 
	I0108 20:12:43.794936   18589 kubeadm.go:322] 
	I0108 20:12:43.795009   18589 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:12:43.795020   18589 kubeadm.go:322] 
	I0108 20:12:43.795086   18589 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token tgqiiv.1yx1hw794hsivrh3 \
	I0108 20:12:43.795181   18589 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 
	I0108 20:12:43.795192   18589 cni.go:84] Creating CNI manager for ""
	I0108 20:12:43.795199   18589 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 20:12:43.796785   18589 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 20:12:43.798045   18589 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 20:12:43.860928   18589 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
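The 457-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not echoed in the log. As an illustration only (the field values below are assumptions, not the exact file minikube writes), a minimal bridge-plus-portmap CNI config of that kind could be laid down like this:

    # Hypothetical sketch: write a minimal bridge CNI conflist.
    # Subnet, names, and version are placeholders, not minikube's actual values.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF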
	I0108 20:12:43.922300   18589 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 20:12:43.922381   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:43.922393   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=addons-117367 minikube.k8s.io/updated_at=2024_01_08T20_12_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:43.965442   18589 ops.go:34] apiserver oom_adj: -16
	I0108 20:12:44.193854   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:44.694205   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:45.194230   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:45.694911   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:46.194818   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:46.693897   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:47.194305   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:47.694465   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:48.194015   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:48.694669   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:49.194690   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:49.694179   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:50.194272   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:50.694715   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:51.194112   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:51.694271   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:52.194597   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:52.694898   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:53.194821   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:53.694869   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:54.193902   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:54.694704   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:55.193924   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:55.694773   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:56.194577   18589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:12:56.353374   18589 kubeadm.go:1088] duration metric: took 12.431057104s to wait for elevateKubeSystemPrivileges.
	I0108 20:12:56.353421   18589 kubeadm.go:406] StartCluster complete in 24.942797318s
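The burst of repeated "kubectl get sa default" calls just above is minikube waiting for the cluster's default service account to be served before it finishes elevating kube-system privileges (the minikube-rbac cluster-admin binding created earlier). A rough shell equivalent of that wait, reusing the binary and kubeconfig paths shown in the log and an assumed 500ms retry interval, is:

    # Poll until the "default" ServiceAccount is available from the new apiserver.
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # assumed interval; the log entries are spaced roughly 500ms apart
    done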
	I0108 20:12:56.353440   18589 settings.go:142] acquiring lock: {Name:mk91d3baf51872e4bb0758b94fca7c7249bb9666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:12:56.353565   18589 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:12:56.353904   18589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/kubeconfig: {Name:mkeb2e8a20e31c0c2d5c7e8214a27af3141300ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:12:56.354067   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:12:56.354143   18589 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
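The toEnable map above is the addon set this test run resolved for the addons-117367 profile (ingress, registry, metrics-server, csi-hostpath-driver, and others). For reference, the same addons can be toggled per profile from the minikube CLI; a short sketch covering only a few of the entries:

    # Enable a few of the addons from the map on the addons-117367 profile.
    minikube -p addons-117367 addons enable ingress
    minikube -p addons-117367 addons enable metrics-server
    minikube -p addons-117367 addons enable csi-hostpath-driver
    minikube -p addons-117367 addons list   # verify which addons ended up enabled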
	I0108 20:12:56.354254   18589 addons.go:69] Setting yakd=true in profile "addons-117367"
	I0108 20:12:56.354261   18589 addons.go:69] Setting metrics-server=true in profile "addons-117367"
	I0108 20:12:56.354276   18589 addons.go:69] Setting cloud-spanner=true in profile "addons-117367"
	I0108 20:12:56.354281   18589 addons.go:237] Setting addon metrics-server=true in "addons-117367"
	I0108 20:12:56.354287   18589 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-117367"
	I0108 20:12:56.354290   18589 addons.go:237] Setting addon cloud-spanner=true in "addons-117367"
	I0108 20:12:56.354299   18589 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-117367"
	I0108 20:12:56.354298   18589 config.go:182] Loaded profile config "addons-117367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:12:56.354305   18589 addons.go:69] Setting ingress=true in profile "addons-117367"
	I0108 20:12:56.354328   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.354334   18589 addons.go:237] Setting addon ingress=true in "addons-117367"
	I0108 20:12:56.354336   18589 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-117367"
	I0108 20:12:56.354342   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.354347   18589 addons.go:69] Setting storage-provisioner=true in profile "addons-117367"
	I0108 20:12:56.354360   18589 addons.go:237] Setting addon storage-provisioner=true in "addons-117367"
	I0108 20:12:56.354353   18589 addons.go:69] Setting ingress-dns=true in profile "addons-117367"
	I0108 20:12:56.354370   18589 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-117367"
	I0108 20:12:56.354375   18589 addons.go:237] Setting addon ingress-dns=true in "addons-117367"
	I0108 20:12:56.354384   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.354398   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.354409   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.354414   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.354455   18589 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-117367"
	I0108 20:12:56.354469   18589 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-117367"
	I0108 20:12:56.354750   18589 addons.go:69] Setting default-storageclass=true in profile "addons-117367"
	I0108 20:12:56.354763   18589 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-117367"
	I0108 20:12:56.354762   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.354257   18589 addons.go:69] Setting gcp-auth=true in profile "addons-117367"
	I0108 20:12:56.354806   18589 mustload.go:65] Loading cluster: addons-117367
	I0108 20:12:56.354816   18589 addons.go:69] Setting volumesnapshots=true in profile "addons-117367"
	I0108 20:12:56.354819   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.354828   18589 addons.go:237] Setting addon volumesnapshots=true in "addons-117367"
	I0108 20:12:56.354829   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.354845   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.354858   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.354893   18589 addons.go:69] Setting helm-tiller=true in profile "addons-117367"
	I0108 20:12:56.354904   18589 addons.go:237] Setting addon helm-tiller=true in "addons-117367"
	I0108 20:12:56.354936   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.355022   18589 config.go:182] Loaded profile config "addons-117367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:12:56.355037   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.355057   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.355107   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.355124   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.355183   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.355213   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.355251   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.355268   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.355363   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.355404   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.354763   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.355530   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.355601   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.355626   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.354278   18589 addons.go:237] Setting addon yakd=true in "addons-117367"
	I0108 20:12:56.355673   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.356038   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.356057   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.356214   18589 addons.go:69] Setting inspektor-gadget=true in profile "addons-117367"
	I0108 20:12:56.356310   18589 addons.go:237] Setting addon inspektor-gadget=true in "addons-117367"
	I0108 20:12:56.356406   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.356881   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.356930   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.354328   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.357066   18589 addons.go:69] Setting registry=true in profile "addons-117367"
	I0108 20:12:56.357089   18589 addons.go:237] Setting addon registry=true in "addons-117367"
	I0108 20:12:56.357126   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.357458   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.357472   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.357486   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.357492   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.360225   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.360274   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.375271   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41883
	I0108 20:12:56.375815   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.376030   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42243
	I0108 20:12:56.376399   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.376429   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.376532   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.376767   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.376807   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34059
	I0108 20:12:56.377031   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45587
	I0108 20:12:56.377317   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.377353   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.377372   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.377386   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.377442   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.377651   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.377869   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.377900   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.377961   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.378251   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.378275   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.378784   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.378801   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.378884   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.378939   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.379489   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.380219   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.380597   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.380635   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.380765   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.380798   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.381560   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45749
	I0108 20:12:56.381901   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.382483   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.382506   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.382821   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.382888   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41339
	I0108 20:12:56.383041   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.383356   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.383776   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.383797   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.384104   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.384727   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.384763   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.386179   18589 addons.go:237] Setting addon default-storageclass=true in "addons-117367"
	I0108 20:12:56.386215   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.386499   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.386520   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.386850   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.386874   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.401280   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I0108 20:12:56.402292   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.402811   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.402831   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.403212   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.403442   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.405434   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.408286   18589 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0108 20:12:56.410070   18589 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:12:56.411827   18589 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:12:56.413619   18589 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 20:12:56.413642   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0108 20:12:56.413670   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.414545   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33501
	I0108 20:12:56.415116   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.415878   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.415903   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.416300   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.416587   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45811
	I0108 20:12:56.417148   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.417237   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.417684   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.417700   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.417757   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.417775   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.417803   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0108 20:12:56.417942   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.418304   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.418365   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.418363   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.419017   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.419062   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.419260   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.419405   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.419417   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.419676   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:56.420658   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.421013   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.421061   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.421378   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.426976   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I0108 20:12:56.427155   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44337
	I0108 20:12:56.427657   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.427758   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.428266   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.428293   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.428824   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.429541   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.429580   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.430521   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.430540   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.431010   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.431180   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.433393   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.436346   18589 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0108 20:12:56.438177   18589 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0108 20:12:56.438197   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0108 20:12:56.438228   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.436136   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39653
	I0108 20:12:56.439231   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32973
	I0108 20:12:56.439682   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.439784   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.440365   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.440386   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.440541   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.440551   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.440939   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.441581   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.441623   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.442099   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39239
	I0108 20:12:56.442543   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.442664   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.442859   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.442906   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.443131   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45521
	I0108 20:12:56.443285   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.443316   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.443665   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.443838   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.443979   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.444081   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.444312   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:56.444911   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.444959   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.444977   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.445052   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.445065   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.447984   18589 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0108 20:12:56.446373   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.446376   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.446392   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37051
	I0108 20:12:56.448856   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42275
	I0108 20:12:56.450595   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44179
	I0108 20:12:56.450707   18589 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 20:12:56.450723   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0108 20:12:56.450747   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.451679   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.451711   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.451679   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.451751   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.451728   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.452274   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.452296   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.452433   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.452449   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.452822   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.452845   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.452918   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.452960   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.453228   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.453654   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.453678   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.453840   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.454556   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.454593   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.454676   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.454712   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.457004   18589 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0108 20:12:56.455600   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.455631   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0108 20:12:56.456559   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.458142   18589 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-117367"
	I0108 20:12:56.458860   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0108 20:12:56.461587   18589 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0108 20:12:56.459737   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.459833   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:12:56.460145   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.460337   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.461326   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0108 20:12:56.461738   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.464553   18589 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0108 20:12:56.463291   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.463317   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.463714   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.463749   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.463860   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.463969   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.464986   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0108 20:12:56.467107   18589 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0108 20:12:56.466000   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.466008   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.466022   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.466187   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:56.466438   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.466577   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0108 20:12:56.467228   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.470517   18589 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0108 20:12:56.469528   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.469755   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.470023   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.470087   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.470392   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.472675   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.474546   18589 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0108 20:12:56.473182   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.473196   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.473686   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.473706   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.473908   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.476169   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.477927   18589 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0108 20:12:56.476422   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.477009   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.477045   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.477351   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.478078   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.478174   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.478934   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0108 20:12:56.481265   18589 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0108 20:12:56.479187   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.479994   18589 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0108 20:12:56.480327   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.480639   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.482930   18589 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0108 20:12:56.485036   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0108 20:12:56.485055   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.487363   18589 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:12:56.483511   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.483719   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45477
	I0108 20:12:56.485012   18589 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0108 20:12:56.485014   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.489209   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.489215   18589 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:12:56.489233   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:12:56.489256   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.489314   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0108 20:12:56.489327   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.490273   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.490342   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.492141   18589 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0108 20:12:56.493639   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.492165   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I0108 20:12:56.493644   18589 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0108 20:12:56.493804   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0108 20:12:56.493829   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.490840   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.490586   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.493345   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.493945   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.493967   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.493994   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.494000   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.494016   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.494206   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.494384   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.494428   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.494449   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.494558   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:56.495106   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.495436   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.495606   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.495612   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.495626   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.496131   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.496196   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.496272   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:56.496722   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.496737   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.497047   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.497120   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.497166   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.497223   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.497343   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.499992   18589 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0108 20:12:56.497946   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.498133   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44739
	I0108 20:12:56.499263   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.499465   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.499601   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.500998   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.502401   18589 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 20:12:56.502411   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 20:12:56.502428   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.502567   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.502585   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.502636   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.502681   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.503613   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45029
	I0108 20:12:56.505041   18589 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0108 20:12:56.503683   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.503616   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.503712   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:56.503981   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.504570   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46545
	I0108 20:12:56.505830   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.506399   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.506440   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
	I0108 20:12:56.506537   18589 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 20:12:56.506644   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:56.507784   18589 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0108 20:12:56.509359   18589 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0108 20:12:56.509372   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0108 20:12:56.509383   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.507824   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.508121   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0108 20:12:56.509456   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.508349   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.509455   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.508644   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.508646   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.508783   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.509479   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.508592   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.509598   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.509893   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.509916   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.509981   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.510008   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.510292   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.510303   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.510502   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.510716   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:12:56.510751   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:12:56.510757   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.510776   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.510828   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.511109   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:56.512274   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.512490   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.512896   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.513299   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.515023   18589 out.go:177]   - Using image docker.io/registry:2.8.3
	I0108 20:12:56.513991   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.514005   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.514018   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.514833   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.515057   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.514980   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.515311   18589 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:12:56.515338   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.515423   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.515646   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.516617   18589 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0108 20:12:56.516634   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:12:56.516833   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.518145   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.518166   18589 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0108 20:12:56.519668   18589 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0108 20:12:56.519683   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0108 20:12:56.519695   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.518185   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.521125   18589 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0108 20:12:56.521144   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0108 20:12:56.521162   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.518350   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.518359   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:56.521959   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.522157   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:56.524231   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.524487   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.524658   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.524685   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.524796   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.524820   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.524853   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.525032   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.525033   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.525172   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.525221   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.525293   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.525343   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:56.525902   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:12:56.525938   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.526301   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.526316   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.526545   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.526690   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.526866   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.526970   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	W0108 20:12:56.527149   18589 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33386->192.168.39.205:22: read: connection reset by peer
	I0108 20:12:56.527170   18589 retry.go:31] will retry after 149.901321ms: ssh: handshake failed: read tcp 192.168.39.1:33386->192.168.39.205:22: read: connection reset by peer
	I0108 20:12:56.534503   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44629
	I0108 20:12:56.534942   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:12:56.535426   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:12:56.535443   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:12:56.535806   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:12:56.535993   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:12:56.537741   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:12:56.540055   18589 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0108 20:12:56.542641   18589 out.go:177]   - Using image docker.io/busybox:stable
	I0108 20:12:56.545468   18589 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 20:12:56.545488   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0108 20:12:56.545512   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:12:56.548516   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.549122   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:12:56.549168   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:12:56.549319   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:12:56.549504   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:12:56.549644   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:12:56.549767   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	W0108 20:12:56.551247   18589 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33402->192.168.39.205:22: read: connection reset by peer
	I0108 20:12:56.551268   18589 retry.go:31] will retry after 155.505642ms: ssh: handshake failed: read tcp 192.168.39.1:33402->192.168.39.205:22: read: connection reset by peer
	W0108 20:12:56.678541   18589 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33410->192.168.39.205:22: read: connection reset by peer
	I0108 20:12:56.678571   18589 retry.go:31] will retry after 399.291163ms: ssh: handshake failed: read tcp 192.168.39.1:33410->192.168.39.205:22: read: connection reset by peer
	I0108 20:12:56.800141   18589 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0108 20:12:56.800160   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0108 20:12:56.846532   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0108 20:12:56.918875   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
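The sed pipeline in the command above rewrites the coredns ConfigMap in place before re-applying it: one insert expression splices a "log" directive ahead of "errors", the other splices a hosts block ahead of the "forward" line. As a rough sketch (assuming the stock Corefile for this Kubernetes version; directives not touched by the pipeline are elided), the resulting fragment looks approximately like:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The hosts block lets in-cluster pods resolve host.minikube.internal to 192.168.39.1, the host side of the mk-addons-117367 network, which is what the later "host record injected into CoreDNS's ConfigMap" line confirms.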
	I0108 20:12:56.960997   18589 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-117367" context rescaled to 1 replicas
	I0108 20:12:56.961050   18589 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:12:56.963031   18589 out.go:177] * Verifying Kubernetes components...
	I0108 20:12:56.964658   18589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:12:57.031517   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 20:12:57.035728   18589 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0108 20:12:57.035757   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0108 20:12:57.083484   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:12:57.117582   18589 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 20:12:57.117605   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0108 20:12:57.161261   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 20:12:57.163212   18589 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 20:12:57.163228   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0108 20:12:57.163802   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 20:12:57.187288   18589 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0108 20:12:57.187318   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0108 20:12:57.188785   18589 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0108 20:12:57.188802   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0108 20:12:57.190673   18589 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0108 20:12:57.190687   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0108 20:12:57.191519   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 20:12:57.196450   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 20:12:57.205056   18589 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0108 20:12:57.205078   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0108 20:12:57.231421   18589 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 20:12:57.231449   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 20:12:57.442001   18589 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0108 20:12:57.442033   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0108 20:12:57.445110   18589 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0108 20:12:57.445133   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0108 20:12:57.484063   18589 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0108 20:12:57.484087   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0108 20:12:57.485792   18589 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0108 20:12:57.485815   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0108 20:12:57.498011   18589 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 20:12:57.498034   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 20:12:57.515001   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 20:12:57.536567   18589 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0108 20:12:57.536591   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0108 20:12:57.574978   18589 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0108 20:12:57.575006   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0108 20:12:57.680008   18589 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0108 20:12:57.680030   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0108 20:12:57.682388   18589 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0108 20:12:57.682406   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0108 20:12:57.694856   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 20:12:57.696950   18589 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0108 20:12:57.696967   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0108 20:12:57.757643   18589 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0108 20:12:57.757685   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0108 20:12:57.770172   18589 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0108 20:12:57.770197   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0108 20:12:57.830562   18589 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:12:57.830591   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0108 20:12:57.849723   18589 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0108 20:12:57.849748   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0108 20:12:57.891569   18589 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0108 20:12:57.891594   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0108 20:12:57.893802   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0108 20:12:57.918313   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0108 20:12:57.929296   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:12:57.943583   18589 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0108 20:12:57.943606   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0108 20:12:58.009385   18589 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0108 20:12:58.009415   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0108 20:12:58.059580   18589 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0108 20:12:58.059605   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0108 20:12:58.114036   18589 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0108 20:12:58.114064   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0108 20:12:58.145646   18589 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0108 20:12:58.145671   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0108 20:12:58.222809   18589 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0108 20:12:58.222839   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0108 20:12:58.244861   18589 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 20:12:58.244889   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0108 20:12:58.302398   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 20:12:58.306591   18589 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0108 20:12:58.306625   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0108 20:12:58.347793   18589 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 20:12:58.347817   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0108 20:12:58.392632   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 20:13:03.126208   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.279637573s)
	I0108 20:13:03.126239   18589 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.207334297s)
	I0108 20:13:03.126265   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:03.126264   18589 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0108 20:13:03.126278   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:03.126328   18589 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (6.161638688s)
	I0108 20:13:03.126424   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.094877416s)
	I0108 20:13:03.126458   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:03.126473   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:03.126536   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:03.126555   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:03.126573   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:03.126587   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:03.126600   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:03.126758   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:03.126755   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:03.126785   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:03.126801   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:03.126812   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:03.126930   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:03.126965   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:03.126978   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:03.127142   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:03.127159   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:03.127144   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:03.153221   18589 node_ready.go:35] waiting up to 6m0s for node "addons-117367" to be "Ready" ...
	I0108 20:13:03.404945   18589 node_ready.go:49] node "addons-117367" has status "Ready":"True"
	I0108 20:13:03.404976   18589 node_ready.go:38] duration metric: took 251.713372ms waiting for node "addons-117367" to be "Ready" ...
	I0108 20:13:03.404989   18589 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:13:03.500185   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:03.500234   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:03.500626   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:03.500694   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:03.500696   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:03.580538   18589 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:04.578792   18589 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0108 20:13:04.578832   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:13:04.582150   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:13:04.582603   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:13:04.582625   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:13:04.582840   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:13:04.583063   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:13:04.583243   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:13:04.583403   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:13:04.874029   18589 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0108 20:13:04.905215   18589 addons.go:237] Setting addon gcp-auth=true in "addons-117367"
	I0108 20:13:04.905261   18589 host.go:66] Checking if "addons-117367" exists ...
	I0108 20:13:04.905583   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:13:04.905623   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:13:04.921380   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0108 20:13:04.921905   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:13:04.922479   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:13:04.922501   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:13:04.922781   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:13:04.923401   18589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:13:04.923454   18589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:13:04.938641   18589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
	I0108 20:13:04.939097   18589 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:13:04.939627   18589 main.go:141] libmachine: Using API Version  1
	I0108 20:13:04.939656   18589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:13:04.940122   18589 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:13:04.940345   18589 main.go:141] libmachine: (addons-117367) Calling .GetState
	I0108 20:13:04.941886   18589 main.go:141] libmachine: (addons-117367) Calling .DriverName
	I0108 20:13:04.942119   18589 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0108 20:13:04.942145   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHHostname
	I0108 20:13:04.944791   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:13:04.945167   18589 main.go:141] libmachine: (addons-117367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:96:f3", ip: ""} in network mk-addons-117367: {Iface:virbr1 ExpiryTime:2024-01-08 21:12:15 +0000 UTC Type:0 Mac:52:54:00:12:96:f3 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-117367 Clientid:01:52:54:00:12:96:f3}
	I0108 20:13:04.945203   18589 main.go:141] libmachine: (addons-117367) DBG | domain addons-117367 has defined IP address 192.168.39.205 and MAC address 52:54:00:12:96:f3 in network mk-addons-117367
	I0108 20:13:04.945330   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHPort
	I0108 20:13:04.945536   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHKeyPath
	I0108 20:13:04.945712   18589 main.go:141] libmachine: (addons-117367) Calling .GetSSHUsername
	I0108 20:13:04.945883   18589 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/addons-117367/id_rsa Username:docker}
	I0108 20:13:05.266131   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.1048395s)
	I0108 20:13:05.266189   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:05.266203   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:05.266236   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.182712675s)
	I0108 20:13:05.266250   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.102414607s)
	I0108 20:13:05.266276   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:05.266282   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:05.266289   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:05.266306   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.074766429s)
	I0108 20:13:05.266331   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:05.266309   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:05.266342   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:05.266456   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:05.266526   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:05.266568   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:05.266578   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:05.266587   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:05.266561   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:05.266600   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:05.266607   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:05.266617   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:05.266649   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:05.266710   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:05.266757   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:05.266767   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:05.266777   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:05.266786   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:05.266995   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:05.267011   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:05.267022   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:05.267031   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:05.267104   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:05.267131   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:05.267147   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:05.267178   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:05.267186   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:05.267248   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:05.267263   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:05.268464   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:05.268516   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:05.268535   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:05.268734   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:05.268749   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:05.426131   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:05.426151   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:05.426444   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:05.426471   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:05.695880   18589 pod_ready.go:102] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:06.855102   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.340060834s)
	I0108 20:13:06.855106   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.658622262s)
	I0108 20:13:06.855153   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:06.855171   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:06.855183   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:06.855198   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:06.855199   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.160310342s)
	I0108 20:13:06.855219   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:06.855234   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:06.855296   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.961460434s)
	I0108 20:13:06.855330   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:06.855342   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:06.855341   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.936981821s)
	I0108 20:13:06.855419   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:06.855428   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:06.855507   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.926123031s)
	W0108 20:13:06.855539   18589 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 20:13:06.855558   18589 retry.go:31] will retry after 354.999955ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
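The failure above is an ordering race rather than a missing manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch as the CRDs that define its kind, and the API server has not finished establishing those CRDs when the custom resource arrives, hence "ensure CRDs are installed first". The tool just retries after ~355ms (the retried apply, visible further down, also adds --force). A hypothetical way to avoid the race entirely, assuming the same manifests copied to a working directory, is to apply the CRDs first, wait for them to become Established, and only then apply the custom resource:

    # hypothetical two-phase apply; file names mirror the addon manifests listed above
    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
                  -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
                  -f snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=Established \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f csi-hostpath-snapshotclass.yaml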
	I0108 20:13:06.855539   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.553096473s)
	I0108 20:13:06.855587   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:06.855597   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:06.855598   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:06.855629   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:06.855638   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:06.855646   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:06.855648   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:06.855656   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:06.855667   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:06.855675   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:06.855683   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:06.855691   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:06.856142   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:06.856166   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:06.856191   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:06.856199   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:06.856259   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:06.856268   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:06.856277   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:06.856286   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:06.856407   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:06.856425   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:06.856432   18589 addons.go:473] Verifying addon ingress=true in "addons-117367"
	I0108 20:13:06.859096   18589 out.go:177] * Verifying ingress addon...
	I0108 20:13:06.857504   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:06.857516   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:06.857523   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:06.857537   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:06.857552   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:06.857572   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:06.857819   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:06.857851   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:06.860579   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:06.860597   18589 addons.go:473] Verifying addon metrics-server=true in "addons-117367"
	I0108 20:13:06.860620   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:06.860634   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:06.860619   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:06.860641   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:06.860661   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:06.860643   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:06.860668   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:06.860674   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:06.860678   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:06.860890   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:06.860907   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:06.861005   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:06.861005   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:06.861015   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:06.861026   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:06.861029   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:06.861037   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:06.861054   18589 addons.go:473] Verifying addon registry=true in "addons-117367"
	I0108 20:13:06.863712   18589 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-117367 service yakd-dashboard -n yakd-dashboard
	
	
	I0108 20:13:06.861530   18589 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0108 20:13:06.865379   18589 out.go:177] * Verifying registry addon...
	I0108 20:13:06.867748   18589 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0108 20:13:06.892528   18589 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0108 20:13:06.892557   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:06.925877   18589 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 20:13:06.925907   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
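The kapi.go:75/86/96 lines above are minikube's own readiness poller: it lists the pods matching each label selector and loops until they leave Pending and become healthy, logging the current phase on every pass. A rough, hypothetical hand-run equivalent for just the ingress controller pod (selector and timeout chosen here for illustration, not taken from the test) would be:

    kubectl -n ingress-nginx wait pod \
        -l app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/component=controller \
        --for=condition=Ready --timeout=300s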
	I0108 20:13:07.210804   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 20:13:07.387948   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:07.394994   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:07.779897   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.387199651s)
	I0108 20:13:07.779960   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:07.779969   18589 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.837826235s)
	I0108 20:13:07.782214   18589 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 20:13:07.779977   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:07.785447   18589 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0108 20:13:07.784177   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:07.784234   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:07.787425   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:07.787447   18589 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0108 20:13:07.787455   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:07.787461   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0108 20:13:07.787469   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:07.787837   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:07.787864   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:07.787876   18589 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-117367"
	I0108 20:13:07.789584   18589 out.go:177] * Verifying csi-hostpath-driver addon...
	I0108 20:13:07.787842   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:07.791991   18589 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0108 20:13:07.869096   18589 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 20:13:07.869118   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:07.971026   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:07.971081   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:07.985860   18589 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0108 20:13:07.985888   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0108 20:13:08.123525   18589 pod_ready.go:102] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:08.292613   18589 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 20:13:08.292636   18589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0108 20:13:08.314060   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:08.367311   18589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 20:13:08.420896   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:08.422385   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:08.831998   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:08.887968   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:08.931105   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:09.299217   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:09.478720   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:09.504537   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:09.828172   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:09.896482   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:09.899145   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:09.943823   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.732964275s)
	I0108 20:13:09.943899   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:09.943913   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:09.944193   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:09.944247   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:09.944264   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:09.944280   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:09.944555   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:09.944603   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:09.944624   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:10.340080   18589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.972726505s)
	I0108 20:13:10.340156   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:10.340172   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:10.340448   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:10.340467   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:10.340485   18589 main.go:141] libmachine: Making call to close driver server
	I0108 20:13:10.340542   18589 main.go:141] libmachine: (addons-117367) Calling .Close
	I0108 20:13:10.340888   18589 main.go:141] libmachine: (addons-117367) DBG | Closing plugin on server side
	I0108 20:13:10.340888   18589 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:13:10.340915   18589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:13:10.341829   18589 addons.go:473] Verifying addon gcp-auth=true in "addons-117367"
	I0108 20:13:10.343919   18589 out.go:177] * Verifying gcp-auth addon...
	I0108 20:13:10.346601   18589 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0108 20:13:10.358708   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:10.362305   18589 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0108 20:13:10.362335   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:10.374512   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:10.382123   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:10.590671   18589 pod_ready.go:102] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:10.812973   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:10.853161   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:10.897142   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:10.900016   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:11.309368   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:11.357542   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:11.371851   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:11.374153   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:11.799156   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:11.854034   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:11.871578   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:11.872965   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:12.302830   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:12.353569   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:12.393774   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:12.396567   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:12.798406   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:12.850953   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:12.872457   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:12.874468   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:13.095709   18589 pod_ready.go:102] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:13.302585   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:13.353890   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:13.370274   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:13.372799   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:13.798923   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:13.851885   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:13.870762   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:13.873108   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:14.303655   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:14.355564   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:14.372765   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:14.375225   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:14.798045   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:14.852668   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:14.869955   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:14.873984   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:15.300547   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:15.352560   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:15.379207   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:15.379574   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:15.599062   18589 pod_ready.go:102] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:15.807834   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:15.853460   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:15.885948   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:15.893897   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:16.297903   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:16.372739   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:16.391305   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:16.391570   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:16.814015   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:16.864510   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:16.885002   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:16.885125   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:17.298183   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:17.352678   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:17.369855   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:17.373367   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:17.808958   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:17.852963   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:17.877185   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:17.877813   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:18.091632   18589 pod_ready.go:102] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:18.297983   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:18.350357   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:18.371629   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:18.376942   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:18.802440   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:18.852718   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:18.879619   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:18.882563   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:19.298357   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:19.352700   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:19.386322   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:19.399557   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:19.798482   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:19.856406   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:19.878730   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:19.878925   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:20.099090   18589 pod_ready.go:102] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:20.306582   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:20.361882   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:20.381307   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:20.387790   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:20.798452   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:20.850524   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:20.875977   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:20.876305   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:21.301444   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:21.362485   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:21.373067   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:21.376171   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:21.905873   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:21.906308   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:21.918859   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:21.919033   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:22.298527   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:22.359823   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:22.370903   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:22.379835   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:22.587567   18589 pod_ready.go:102] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:22.799670   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:22.850851   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:22.870770   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:22.872706   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:23.302604   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:23.351673   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:23.376677   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:23.376690   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:23.798880   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:23.850836   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:23.872781   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:23.875027   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:24.300173   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:24.350919   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:24.376176   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:24.380853   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:24.797751   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:24.851018   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:24.872027   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:24.874220   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:25.087734   18589 pod_ready.go:102] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:25.302166   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:25.352547   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:25.371014   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:25.373618   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:25.811705   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:25.851967   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:25.870095   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:25.873718   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:26.298843   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:26.352477   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:26.373892   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:26.376422   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:26.798659   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:26.850866   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:26.870257   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:26.874031   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:27.298469   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:27.350559   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:27.370975   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:27.374413   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:27.588511   18589 pod_ready.go:102] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:27.800038   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:27.851082   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:27.870933   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:27.873637   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:28.305267   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:28.351183   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:28.371042   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:28.373786   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:28.799776   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:28.851155   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:28.871369   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:28.874082   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:29.559471   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:29.561504   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:29.563986   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:29.572719   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:29.708288   18589 pod_ready.go:102] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:29.798825   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:29.850901   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:29.873942   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:29.878492   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:30.298195   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:30.351492   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:30.370817   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:30.373809   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:30.798820   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:30.851624   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:30.870534   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:30.873634   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:31.298144   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:31.351872   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:31.370261   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:31.373012   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:31.797499   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:31.852945   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:31.873790   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:31.874660   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:32.089198   18589 pod_ready.go:102] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:32.298953   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:32.350991   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:32.370565   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:32.376279   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:32.799112   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:32.851429   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:32.871155   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:32.880651   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:33.300138   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:33.351710   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:33.375341   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:33.375426   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:33.801283   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:33.851035   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:33.869977   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:33.874003   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:34.305382   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:34.351173   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:34.371828   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:34.374583   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:34.589725   18589 pod_ready.go:102] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:34.798849   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:34.851191   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:34.872336   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:34.873742   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:35.303129   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:35.352359   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:35.373142   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:35.399455   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:35.841203   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:35.853009   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:35.898746   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:35.899074   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:36.299073   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:36.351601   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:36.372240   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:36.377029   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:36.801669   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:36.851625   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:36.874343   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:36.887945   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:37.087842   18589 pod_ready.go:92] pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace has status "Ready":"True"
	I0108 20:13:37.087870   18589 pod_ready.go:81] duration metric: took 33.507298211s waiting for pod "coredns-5dd5756b68-l64bf" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:37.087884   18589 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pfhbz" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:37.090413   18589 pod_ready.go:97] error getting pod "coredns-5dd5756b68-pfhbz" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-pfhbz" not found
	I0108 20:13:37.090435   18589 pod_ready.go:81] duration metric: took 2.543491ms waiting for pod "coredns-5dd5756b68-pfhbz" in "kube-system" namespace to be "Ready" ...
	E0108 20:13:37.090446   18589 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-pfhbz" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-pfhbz" not found
	I0108 20:13:37.090455   18589 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-117367" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:37.095551   18589 pod_ready.go:92] pod "etcd-addons-117367" in "kube-system" namespace has status "Ready":"True"
	I0108 20:13:37.095573   18589 pod_ready.go:81] duration metric: took 5.110986ms waiting for pod "etcd-addons-117367" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:37.095584   18589 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-117367" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:37.101195   18589 pod_ready.go:92] pod "kube-apiserver-addons-117367" in "kube-system" namespace has status "Ready":"True"
	I0108 20:13:37.101213   18589 pod_ready.go:81] duration metric: took 5.621471ms waiting for pod "kube-apiserver-addons-117367" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:37.101221   18589 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-117367" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:37.107489   18589 pod_ready.go:92] pod "kube-controller-manager-addons-117367" in "kube-system" namespace has status "Ready":"True"
	I0108 20:13:37.107508   18589 pod_ready.go:81] duration metric: took 6.280926ms waiting for pod "kube-controller-manager-addons-117367" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:37.107516   18589 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x9wjt" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:37.286585   18589 pod_ready.go:92] pod "kube-proxy-x9wjt" in "kube-system" namespace has status "Ready":"True"
	I0108 20:13:37.286614   18589 pod_ready.go:81] duration metric: took 179.09154ms waiting for pod "kube-proxy-x9wjt" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:37.286624   18589 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-117367" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:37.299435   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:37.351666   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:37.371635   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:37.374658   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:37.685426   18589 pod_ready.go:92] pod "kube-scheduler-addons-117367" in "kube-system" namespace has status "Ready":"True"
	I0108 20:13:37.685451   18589 pod_ready.go:81] duration metric: took 398.820617ms waiting for pod "kube-scheduler-addons-117367" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:37.685461   18589 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace to be "Ready" ...
	I0108 20:13:37.798721   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:37.850705   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:37.871830   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:37.875439   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:38.298377   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:38.352129   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:38.370625   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:38.372616   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:38.798961   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:38.851989   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:38.870332   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:38.873419   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:39.302391   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:39.351040   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:39.372646   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:39.375890   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:39.692208   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:39.799995   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:39.851101   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:39.872897   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:39.875228   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:40.298616   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:40.350986   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:40.370917   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:40.373201   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:40.802071   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:40.851624   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:40.872425   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:40.878696   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:41.298492   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:41.351099   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:41.371990   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:41.375089   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:41.692738   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:41.799786   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:41.852488   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:41.873936   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:41.876428   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:42.323957   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:42.351743   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:42.369982   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:42.375332   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:42.797246   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:42.851626   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:42.870072   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:42.873317   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:43.308201   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:43.354578   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:43.372981   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:43.375684   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:43.693254   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:43.799238   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:43.851788   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:43.870621   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:43.875500   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:44.299358   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:44.350882   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:44.370708   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:44.373083   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:44.800054   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:44.851301   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:44.871574   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:44.873578   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:45.299184   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:45.351529   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:45.371140   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:45.374201   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:45.694400   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:45.800060   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:45.853501   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:45.871263   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:45.874666   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:46.299217   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:46.352453   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:46.371176   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:46.375755   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:46.798652   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:46.851887   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:46.874943   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:46.875221   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:47.299562   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:47.351193   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:47.371337   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:47.375714   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:47.798404   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:47.850754   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:47.869820   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:47.872694   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:48.192417   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:48.298981   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:48.353349   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:48.371406   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:48.374467   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:48.799685   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:48.852473   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:48.871575   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:48.873654   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:49.299495   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:49.351820   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:49.369919   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:49.375972   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:49.798509   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:49.850379   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:49.870660   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:49.873779   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:50.198382   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:50.299976   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:50.352353   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:50.371239   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:50.383529   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:50.799407   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:50.850689   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:50.871004   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:50.873747   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:51.298963   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:51.351336   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:51.370647   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:51.376174   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:51.798826   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:51.852049   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:51.870989   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:51.874033   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:52.300831   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:52.351801   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:52.371084   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:52.373756   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:52.693737   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:52.799339   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:52.852420   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:52.870385   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:52.874324   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:53.303456   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:53.351824   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:53.371685   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:53.373620   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:53.800210   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:53.852562   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:53.872782   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:53.874771   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:54.297312   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:54.353254   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:54.370708   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:54.373900   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:54.798386   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:54.850589   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:54.871954   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:54.873978   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:55.193980   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:55.298504   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:55.351440   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:55.373349   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:55.374022   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:55.798938   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:55.852870   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:55.869797   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:55.873302   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:56.297994   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:56.351652   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:56.370159   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:56.374055   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:56.799429   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:56.851380   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:56.875766   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:56.876143   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:57.298859   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:57.351164   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:57.371803   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:57.373260   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:57.693000   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:57.799769   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:57.851351   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:57.870478   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:57.876612   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:58.298105   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:58.353561   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:58.372292   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:58.374063   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:58.803959   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:58.851225   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:58.871287   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:58.873652   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:59.298632   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:59.351424   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:59.371161   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:59.373216   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:13:59.693077   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:13:59.798978   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:13:59.851921   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:13:59.870413   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:13:59.873535   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:00.298540   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:00.351207   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:00.370610   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:00.373122   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:00.799184   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:00.851867   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:00.870911   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:00.873975   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:01.298511   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:01.351239   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:01.370751   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:01.373085   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:01.694642   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:01.798443   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:01.851721   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:01.870037   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:01.873495   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:02.298769   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:02.351460   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:02.371498   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:02.373623   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:02.798584   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:02.852322   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:02.871165   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:02.874513   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:03.298558   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:03.351592   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:03.371700   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:03.373746   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:03.798534   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:03.853031   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:03.870721   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:03.876866   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:04.193232   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:04.297947   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:04.351787   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:04.371309   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:04.373890   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:04.798452   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:04.850923   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:04.870755   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:04.874712   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:05.298415   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:05.350772   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:05.370189   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:05.373698   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:05.803625   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:05.851027   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:05.870848   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:05.877570   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:06.193331   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:06.301724   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:06.352525   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:06.372376   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:06.374354   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:06.799263   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:06.851391   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:06.871105   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:06.872934   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:07.300940   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:07.351262   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:07.377081   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:07.377111   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:07.800722   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:07.852157   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:07.870608   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:07.873500   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:08.198290   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:08.304702   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:08.351715   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:08.370772   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:08.374851   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:08.801385   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:08.850509   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:08.871628   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:08.873409   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:09.299597   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:09.351149   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:09.371789   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:09.375005   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:09.803422   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:09.851904   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:09.871337   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:09.875191   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:10.201193   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:10.712246   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:10.715757   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:10.718026   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:10.719717   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:10.798339   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:10.851832   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:10.871756   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:10.873016   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:11.298240   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:11.351498   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:11.370478   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:11.378249   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:11.798636   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:11.851832   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:11.870563   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:11.873672   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:12.299099   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:12.350943   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:12.370105   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:12.374338   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:12.696505   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:12.800004   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:12.851328   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:12.871825   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:12.874125   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:13.298368   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:13.350935   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:13.370280   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:13.373587   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:13.802388   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:13.855664   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:13.875439   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:13.875818   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:14.300915   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:14.351372   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:14.373666   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:14.376570   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:14.808027   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:14.850899   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:14.870541   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:14.878553   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:15.193279   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:15.297993   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:15.443311   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:15.443341   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:15.444284   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:15.802699   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:15.855058   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:15.872803   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:15.873360   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:16.298603   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:16.350964   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:16.370604   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:16.373542   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:16.798537   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:16.850689   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:16.870937   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:16.873873   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:17.196014   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:17.299905   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:17.352485   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:17.370821   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:17.373630   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:17.797732   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:17.852053   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:17.872006   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:17.874554   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:18.311308   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:18.369405   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:18.376104   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:18.386150   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:18.798966   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:18.851395   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:18.874519   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:18.874779   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:19.299269   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:19.365682   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:19.387448   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:19.390021   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:19.693490   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:19.798018   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:19.851383   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:19.870980   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:19.873859   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:20.298814   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:20.351964   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:20.370226   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:20.374441   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:20.964195   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:20.964513   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:20.965940   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:20.967264   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:21.302869   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:21.351636   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:21.370991   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:21.374871   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:21.693604   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:21.800891   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:21.851928   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:21.872861   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:21.877274   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:22.301072   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:22.351067   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:22.370666   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:22.373320   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:22.798284   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:22.853150   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:22.884195   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:22.884383   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:23.297868   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:23.354288   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:23.375531   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:23.382868   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:23.798818   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:23.853042   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:23.870661   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:23.874003   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:24.193035   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:24.299282   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:24.352215   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:24.372267   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:24.374214   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:24.799057   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:24.851169   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:24.870465   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:24.873065   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:25.297722   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:25.351286   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:25.380852   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:25.383110   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:25.799489   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:25.851131   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:25.870629   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:25.873137   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:26.202791   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:26.298168   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:26.351226   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:26.371698   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:26.374318   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:26.802080   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:26.851197   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:26.870324   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:26.873012   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:27.299082   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:27.487521   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:27.487736   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:27.491222   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:27.798960   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:27.850622   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:27.870957   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:27.873224   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:28.205386   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:28.299210   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:28.351322   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:28.371789   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:28.373888   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:28.799564   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:28.850927   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:28.870396   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:28.873652   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:29.300129   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:29.351176   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:29.371686   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:29.373528   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:29.798773   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:29.851050   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:29.872562   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:29.874121   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:30.299219   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:30.351360   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:30.371218   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:30.376248   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:30.694724   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:30.798447   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:30.852229   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:30.871646   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:30.873202   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:31.300376   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:31.352954   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:31.370560   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:31.374983   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:31.800107   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:31.851036   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:31.870742   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:31.874019   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:32.399699   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:32.400722   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:32.404146   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:32.404903   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:32.798795   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:32.850745   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:32.869900   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:32.873134   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:33.194219   18589 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"False"
	I0108 20:14:33.303899   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:33.352735   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:33.371362   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:33.377211   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:33.817847   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:33.851679   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:33.870195   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:33.875337   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:34.298843   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:34.351660   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:34.379141   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:34.380763   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:34.695794   18589 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace has status "Ready":"True"
	I0108 20:14:34.695827   18589 pod_ready.go:81] duration metric: took 57.010359971s waiting for pod "nvidia-device-plugin-daemonset-4czzg" in "kube-system" namespace to be "Ready" ...
	I0108 20:14:34.695837   18589 pod_ready.go:38] duration metric: took 1m31.290833811s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:14:34.695853   18589 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:14:34.695883   18589 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 20:14:34.695936   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 20:14:34.754253   18589 cri.go:89] found id: "1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b"
	I0108 20:14:34.754280   18589 cri.go:89] found id: ""
	I0108 20:14:34.754291   18589 logs.go:284] 1 containers: [1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b]
	I0108 20:14:34.754347   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:34.759185   18589 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 20:14:34.759254   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 20:14:34.798676   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:34.821338   18589 cri.go:89] found id: "098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c"
	I0108 20:14:34.821374   18589 cri.go:89] found id: ""
	I0108 20:14:34.821383   18589 logs.go:284] 1 containers: [098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c]
	I0108 20:14:34.821432   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:34.831858   18589 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 20:14:34.831941   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 20:14:34.851717   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:34.871406   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:34.875927   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:34.946867   18589 cri.go:89] found id: "61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065"
	I0108 20:14:34.946894   18589 cri.go:89] found id: ""
	I0108 20:14:34.946902   18589 logs.go:284] 1 containers: [61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065]
	I0108 20:14:34.946955   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:34.962515   18589 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 20:14:34.962571   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 20:14:35.037438   18589 cri.go:89] found id: "71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8"
	I0108 20:14:35.037463   18589 cri.go:89] found id: ""
	I0108 20:14:35.037472   18589 logs.go:284] 1 containers: [71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8]
	I0108 20:14:35.037534   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:35.047670   18589 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 20:14:35.047748   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 20:14:35.113096   18589 cri.go:89] found id: "2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a"
	I0108 20:14:35.113128   18589 cri.go:89] found id: ""
	I0108 20:14:35.113138   18589 logs.go:284] 1 containers: [2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a]
	I0108 20:14:35.113195   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:35.118657   18589 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 20:14:35.118728   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 20:14:35.201981   18589 cri.go:89] found id: "b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255"
	I0108 20:14:35.202000   18589 cri.go:89] found id: ""
	I0108 20:14:35.202007   18589 logs.go:284] 1 containers: [b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255]
	I0108 20:14:35.202058   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:35.216647   18589 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 20:14:35.216707   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 20:14:35.286979   18589 cri.go:89] found id: ""
	I0108 20:14:35.287000   18589 logs.go:284] 0 containers: []
	W0108 20:14:35.287007   18589 logs.go:286] No container was found matching "kindnet"
	I0108 20:14:35.287015   18589 logs.go:123] Gathering logs for describe nodes ...
	I0108 20:14:35.287028   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 20:14:35.298088   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:35.351254   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:35.371464   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:35.374694   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:35.685123   18589 logs.go:123] Gathering logs for kube-apiserver [1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b] ...
	I0108 20:14:35.685170   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b"
	I0108 20:14:35.802381   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:35.813113   18589 logs.go:123] Gathering logs for coredns [61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065] ...
	I0108 20:14:35.813152   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065"
	I0108 20:14:35.852212   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:35.871007   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:35.873444   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:35.887516   18589 logs.go:123] Gathering logs for kube-scheduler [71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8] ...
	I0108 20:14:35.887554   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8"
	I0108 20:14:35.964916   18589 logs.go:123] Gathering logs for CRI-O ...
	I0108 20:14:35.964951   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 20:14:36.302757   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:36.351966   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:36.373067   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:36.375157   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:36.463217   18589 logs.go:123] Gathering logs for kubelet ...
	I0108 20:14:36.463256   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 20:14:36.541988   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:03 addons-117367 kubelet[1254]: W0108 20:13:03.065868    1254 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-117367" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:36.542164   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:03 addons-117367 kubelet[1254]: E0108 20:13:03.065927    1254 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-117367" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:36.542292   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:03 addons-117367 kubelet[1254]: W0108 20:13:03.066871    1254 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:36.542432   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:03 addons-117367 kubelet[1254]: E0108 20:13:03.066921    1254 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:36.556727   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:10 addons-117367 kubelet[1254]: W0108 20:13:10.310198    1254 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-117367' and this object
	W0108 20:14:36.556905   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:10 addons-117367 kubelet[1254]: E0108 20:13:10.310234    1254 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-117367' and this object
	I0108 20:14:36.571327   18589 logs.go:123] Gathering logs for etcd [098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c] ...
	I0108 20:14:36.571347   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c"
	I0108 20:14:36.665618   18589 logs.go:123] Gathering logs for kube-proxy [2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a] ...
	I0108 20:14:36.665654   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a"
	I0108 20:14:36.745216   18589 logs.go:123] Gathering logs for kube-controller-manager [b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255] ...
	I0108 20:14:36.745244   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255"
	I0108 20:14:36.803549   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:36.831557   18589 logs.go:123] Gathering logs for container status ...
	I0108 20:14:36.831594   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 20:14:36.851068   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:36.870861   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:36.873489   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:36.901220   18589 logs.go:123] Gathering logs for dmesg ...
	I0108 20:14:36.901256   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 20:14:36.914812   18589 out.go:309] Setting ErrFile to fd 2...
	I0108 20:14:36.914840   18589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 20:14:36.914903   18589 out.go:239] X Problems detected in kubelet:
	W0108 20:14:36.914922   18589 out.go:239]   Jan 08 20:13:03 addons-117367 kubelet[1254]: E0108 20:13:03.065927    1254 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-117367" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:36.914937   18589 out.go:239]   Jan 08 20:13:03 addons-117367 kubelet[1254]: W0108 20:13:03.066871    1254 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:36.914948   18589 out.go:239]   Jan 08 20:13:03 addons-117367 kubelet[1254]: E0108 20:13:03.066921    1254 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:36.914954   18589 out.go:239]   Jan 08 20:13:10 addons-117367 kubelet[1254]: W0108 20:13:10.310198    1254 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-117367' and this object
	W0108 20:14:36.914960   18589 out.go:239]   Jan 08 20:13:10 addons-117367 kubelet[1254]: E0108 20:13:10.310234    1254 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-117367' and this object
	I0108 20:14:36.914965   18589 out.go:309] Setting ErrFile to fd 2...
	I0108 20:14:36.914975   18589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:14:37.303826   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:37.351080   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:37.370980   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:37.373433   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:37.799829   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:37.851884   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:37.871457   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:37.872968   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:38.299181   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:38.351616   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:38.370328   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:38.377633   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:38.798335   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:38.851850   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:38.873801   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:38.875139   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:39.303754   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:39.350807   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:39.371002   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:39.372848   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:39.807731   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:39.851509   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:39.871003   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:39.874085   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:40.303313   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:40.352198   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:40.371692   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:40.373869   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:40.801948   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:40.866312   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:40.877471   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:40.877550   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:41.297952   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:41.351099   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:41.370211   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:41.372824   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:41.798153   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:41.864513   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:41.892158   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 20:14:41.892627   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:42.297920   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:42.351344   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:42.371281   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:42.373364   18589 kapi.go:107] duration metric: took 1m35.505617707s to wait for kubernetes.io/minikube-addons=registry ...
	I0108 20:14:42.802473   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:42.851692   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:42.873725   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:43.299573   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:43.355460   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:43.370941   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:43.805019   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:43.851923   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:43.870071   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:44.306134   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:44.351574   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:44.371040   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:44.799390   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:44.855406   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:44.873389   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:45.298506   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:45.350656   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:45.370307   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:45.803788   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:45.851892   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:45.870841   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:46.300248   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:46.363169   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:46.370912   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:46.805605   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:46.855423   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:46.872320   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:46.915863   18589 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:14:46.949108   18589 api_server.go:72] duration metric: took 1m49.988019405s to wait for apiserver process to appear ...
	I0108 20:14:46.949134   18589 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:14:46.949173   18589 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 20:14:46.949239   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 20:14:47.068048   18589 cri.go:89] found id: "1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b"
	I0108 20:14:47.068074   18589 cri.go:89] found id: ""
	I0108 20:14:47.068083   18589 logs.go:284] 1 containers: [1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b]
	I0108 20:14:47.068151   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:47.087808   18589 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 20:14:47.087897   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 20:14:47.202323   18589 cri.go:89] found id: "098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c"
	I0108 20:14:47.202351   18589 cri.go:89] found id: ""
	I0108 20:14:47.202361   18589 logs.go:284] 1 containers: [098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c]
	I0108 20:14:47.202421   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:47.218877   18589 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 20:14:47.218982   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 20:14:47.304880   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:47.356601   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:47.371081   18589 cri.go:89] found id: "61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065"
	I0108 20:14:47.371102   18589 cri.go:89] found id: ""
	I0108 20:14:47.371109   18589 logs.go:284] 1 containers: [61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065]
	I0108 20:14:47.371156   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:47.371178   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:47.395912   18589 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 20:14:47.396002   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 20:14:47.547128   18589 cri.go:89] found id: "71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8"
	I0108 20:14:47.547155   18589 cri.go:89] found id: ""
	I0108 20:14:47.547165   18589 logs.go:284] 1 containers: [71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8]
	I0108 20:14:47.547228   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:47.555331   18589 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 20:14:47.555411   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 20:14:47.689408   18589 cri.go:89] found id: "2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a"
	I0108 20:14:47.689435   18589 cri.go:89] found id: ""
	I0108 20:14:47.689444   18589 logs.go:284] 1 containers: [2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a]
	I0108 20:14:47.689502   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:47.697722   18589 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 20:14:47.697799   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 20:14:47.747834   18589 cri.go:89] found id: "b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255"
	I0108 20:14:47.747861   18589 cri.go:89] found id: ""
	I0108 20:14:47.747873   18589 logs.go:284] 1 containers: [b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255]
	I0108 20:14:47.747927   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:47.752203   18589 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 20:14:47.752269   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 20:14:47.803224   18589 cri.go:89] found id: ""
	I0108 20:14:47.803245   18589 logs.go:284] 0 containers: []
	W0108 20:14:47.803251   18589 logs.go:286] No container was found matching "kindnet"
	I0108 20:14:47.803259   18589 logs.go:123] Gathering logs for etcd [098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c] ...
	I0108 20:14:47.803273   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c"
	I0108 20:14:47.874732   18589 logs.go:123] Gathering logs for coredns [61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065] ...
	I0108 20:14:47.874771   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065"
	I0108 20:14:47.936599   18589 logs.go:123] Gathering logs for kube-proxy [2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a] ...
	I0108 20:14:47.936646   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a"
	I0108 20:14:47.986695   18589 logs.go:123] Gathering logs for describe nodes ...
	I0108 20:14:47.986724   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 20:14:48.035007   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:48.035186   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:48.040522   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:48.266369   18589 logs.go:123] Gathering logs for dmesg ...
	I0108 20:14:48.266402   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 20:14:48.298391   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:48.309167   18589 logs.go:123] Gathering logs for kube-apiserver [1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b] ...
	I0108 20:14:48.309200   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b"
	I0108 20:14:48.351338   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:48.372187   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:48.407049   18589 logs.go:123] Gathering logs for kube-scheduler [71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8] ...
	I0108 20:14:48.407099   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8"
	I0108 20:14:48.482893   18589 logs.go:123] Gathering logs for kube-controller-manager [b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255] ...
	I0108 20:14:48.482947   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255"
	I0108 20:14:48.579416   18589 logs.go:123] Gathering logs for CRI-O ...
	I0108 20:14:48.579451   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 20:14:48.798844   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:48.850791   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:48.873102   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:49.028758   18589 logs.go:123] Gathering logs for container status ...
	I0108 20:14:49.028792   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 20:14:49.298610   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:49.350772   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:49.363010   18589 logs.go:123] Gathering logs for kubelet ...
	I0108 20:14:49.363043   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 20:14:49.370068   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0108 20:14:49.431099   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:03 addons-117367 kubelet[1254]: W0108 20:13:03.065868    1254 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-117367" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:49.431293   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:03 addons-117367 kubelet[1254]: E0108 20:13:03.065927    1254 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-117367" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:49.431537   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:03 addons-117367 kubelet[1254]: W0108 20:13:03.066871    1254 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:49.431738   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:03 addons-117367 kubelet[1254]: E0108 20:13:03.066921    1254 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:49.446236   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:10 addons-117367 kubelet[1254]: W0108 20:13:10.310198    1254 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-117367' and this object
	W0108 20:14:49.446438   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:10 addons-117367 kubelet[1254]: E0108 20:13:10.310234    1254 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-117367' and this object
	I0108 20:14:49.463584   18589 out.go:309] Setting ErrFile to fd 2...
	I0108 20:14:49.463620   18589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 20:14:49.463678   18589 out.go:239] X Problems detected in kubelet:
	W0108 20:14:49.463688   18589 out.go:239]   Jan 08 20:13:03 addons-117367 kubelet[1254]: E0108 20:13:03.065927    1254 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-117367" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:49.463716   18589 out.go:239]   Jan 08 20:13:03 addons-117367 kubelet[1254]: W0108 20:13:03.066871    1254 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:49.463724   18589 out.go:239]   Jan 08 20:13:03 addons-117367 kubelet[1254]: E0108 20:13:03.066921    1254 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:14:49.463733   18589 out.go:239]   Jan 08 20:13:10 addons-117367 kubelet[1254]: W0108 20:13:10.310198    1254 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-117367' and this object
	W0108 20:14:49.463746   18589 out.go:239]   Jan 08 20:13:10 addons-117367 kubelet[1254]: E0108 20:13:10.310234    1254 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-117367' and this object
	I0108 20:14:49.463762   18589 out.go:309] Setting ErrFile to fd 2...
	I0108 20:14:49.463769   18589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:14:49.807860   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:49.850623   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:49.870701   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:50.299850   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:50.350883   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:50.370612   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:50.798128   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:50.850769   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:50.871300   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:51.300004   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:51.351190   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:51.370850   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:51.800890   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:51.851520   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:51.871573   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:52.307084   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:52.351634   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:52.372438   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:52.801121   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:52.854254   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:52.871518   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:53.304251   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:53.352236   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:53.370344   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:53.799344   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:53.851340   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:53.870849   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:54.389961   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:54.395469   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:54.397988   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:54.798594   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:54.851611   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:54.871205   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:55.307671   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:55.353233   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:55.370728   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:55.810140   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:55.851598   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:55.873360   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:56.299226   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:56.351689   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:56.374966   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:56.803082   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:56.852053   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:56.870220   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:57.301515   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:57.351530   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:57.372352   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:57.799625   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:57.850630   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:57.869544   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:58.300812   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:58.351372   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:58.371030   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:58.797924   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:58.852205   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:58.870994   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:59.299034   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:59.352506   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:59.371101   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:59.465854   18589 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0108 20:14:59.472042   18589 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
	I0108 20:14:59.473504   18589 api_server.go:141] control plane version: v1.28.4
	I0108 20:14:59.473529   18589 api_server.go:131] duration metric: took 12.524387525s to wait for apiserver health ...
	I0108 20:14:59.473537   18589 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:14:59.473556   18589 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 20:14:59.473601   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 20:14:59.549625   18589 cri.go:89] found id: "1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b"
	I0108 20:14:59.549647   18589 cri.go:89] found id: ""
	I0108 20:14:59.549655   18589 logs.go:284] 1 containers: [1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b]
	I0108 20:14:59.549702   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:59.555478   18589 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 20:14:59.555552   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 20:14:59.623979   18589 cri.go:89] found id: "098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c"
	I0108 20:14:59.624006   18589 cri.go:89] found id: ""
	I0108 20:14:59.624017   18589 logs.go:284] 1 containers: [098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c]
	I0108 20:14:59.624067   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:59.630548   18589 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 20:14:59.630622   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 20:14:59.726061   18589 cri.go:89] found id: "61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065"
	I0108 20:14:59.726089   18589 cri.go:89] found id: ""
	I0108 20:14:59.726099   18589 logs.go:284] 1 containers: [61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065]
	I0108 20:14:59.726155   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:59.738358   18589 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 20:14:59.738437   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 20:14:59.798558   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:14:59.831294   18589 cri.go:89] found id: "71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8"
	I0108 20:14:59.831319   18589 cri.go:89] found id: ""
	I0108 20:14:59.831327   18589 logs.go:284] 1 containers: [71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8]
	I0108 20:14:59.831376   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:59.836588   18589 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 20:14:59.836660   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 20:14:59.851109   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:14:59.870474   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:14:59.923364   18589 cri.go:89] found id: "2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a"
	I0108 20:14:59.923403   18589 cri.go:89] found id: ""
	I0108 20:14:59.923417   18589 logs.go:284] 1 containers: [2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a]
	I0108 20:14:59.923477   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:14:59.928399   18589 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 20:14:59.928469   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 20:14:59.996962   18589 cri.go:89] found id: "b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255"
	I0108 20:14:59.996992   18589 cri.go:89] found id: ""
	I0108 20:14:59.997003   18589 logs.go:284] 1 containers: [b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255]
	I0108 20:14:59.997064   18589 ssh_runner.go:195] Run: which crictl
	I0108 20:15:00.002843   18589 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 20:15:00.002904   18589 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 20:15:00.053732   18589 cri.go:89] found id: ""
	I0108 20:15:00.053820   18589 logs.go:284] 0 containers: []
	W0108 20:15:00.053838   18589 logs.go:286] No container was found matching "kindnet"
	I0108 20:15:00.053850   18589 logs.go:123] Gathering logs for container status ...
	I0108 20:15:00.053866   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 20:15:00.147302   18589 logs.go:123] Gathering logs for kubelet ...
	I0108 20:15:00.147335   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 20:15:00.216215   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:03 addons-117367 kubelet[1254]: W0108 20:13:03.065868    1254 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-117367" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:15:00.216390   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:03 addons-117367 kubelet[1254]: E0108 20:13:03.065927    1254 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-117367" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:15:00.216516   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:03 addons-117367 kubelet[1254]: W0108 20:13:03.066871    1254 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:15:00.216654   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:03 addons-117367 kubelet[1254]: E0108 20:13:03.066921    1254 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:15:00.231667   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:10 addons-117367 kubelet[1254]: W0108 20:13:10.310198    1254 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-117367' and this object
	W0108 20:15:00.231902   18589 logs.go:138] Found kubelet problem: Jan 08 20:13:10 addons-117367 kubelet[1254]: E0108 20:13:10.310234    1254 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-117367' and this object
	I0108 20:15:00.253871   18589 logs.go:123] Gathering logs for dmesg ...
	I0108 20:15:00.253929   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 20:15:00.311946   18589 logs.go:123] Gathering logs for coredns [61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065] ...
	I0108 20:15:00.311977   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065"
	I0108 20:15:00.314394   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:00.363345   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:00.406810   18589 logs.go:123] Gathering logs for kube-scheduler [71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8] ...
	I0108 20:15:00.406849   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8"
	I0108 20:15:00.412432   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:00.498742   18589 logs.go:123] Gathering logs for kube-controller-manager [b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255] ...
	I0108 20:15:00.498780   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255"
	I0108 20:15:00.610436   18589 logs.go:123] Gathering logs for describe nodes ...
	I0108 20:15:00.610480   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 20:15:00.803330   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:00.805856   18589 logs.go:123] Gathering logs for kube-apiserver [1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b] ...
	I0108 20:15:00.805884   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b"
	I0108 20:15:00.850803   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:00.864493   18589 logs.go:123] Gathering logs for etcd [098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c] ...
	I0108 20:15:00.864530   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c"
	I0108 20:15:00.870439   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:00.949402   18589 logs.go:123] Gathering logs for kube-proxy [2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a] ...
	I0108 20:15:00.949438   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a"
	I0108 20:15:01.015863   18589 logs.go:123] Gathering logs for CRI-O ...
	I0108 20:15:01.015893   18589 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 20:15:01.313396   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:01.356886   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:01.371864   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:01.477589   18589 out.go:309] Setting ErrFile to fd 2...
	I0108 20:15:01.477637   18589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 20:15:01.477715   18589 out.go:239] X Problems detected in kubelet:
	W0108 20:15:01.477725   18589 out.go:239]   Jan 08 20:13:03 addons-117367 kubelet[1254]: E0108 20:13:03.065927    1254 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-117367" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:15:01.477734   18589 out.go:239]   Jan 08 20:13:03 addons-117367 kubelet[1254]: W0108 20:13:03.066871    1254 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:15:01.477767   18589 out.go:239]   Jan 08 20:13:03 addons-117367 kubelet[1254]: E0108 20:13:03.066921    1254 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-117367' and this object
	W0108 20:15:01.477778   18589 out.go:239]   Jan 08 20:13:10 addons-117367 kubelet[1254]: W0108 20:13:10.310198    1254 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-117367' and this object
	W0108 20:15:01.477787   18589 out.go:239]   Jan 08 20:13:10 addons-117367 kubelet[1254]: E0108 20:13:10.310234    1254 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-117367" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-117367' and this object
	I0108 20:15:01.477794   18589 out.go:309] Setting ErrFile to fd 2...
	I0108 20:15:01.477803   18589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:15:01.798398   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:01.851640   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:01.870874   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:02.300968   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 20:15:02.356453   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:02.376117   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:02.802858   18589 kapi.go:107] duration metric: took 1m55.010864539s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0108 20:15:02.850704   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:02.871506   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:03.352433   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:03.371625   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:03.851454   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:03.870643   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:04.351664   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:04.375528   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:04.851478   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:04.871796   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:05.354735   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:05.370026   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:05.850529   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:05.871364   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:06.356281   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:06.371544   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:06.851416   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:06.871032   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:07.350832   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:07.372546   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:07.851516   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:07.871373   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:08.351515   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:08.371267   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:08.851722   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:08.872531   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:09.351529   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:09.373741   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:09.851162   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:09.870821   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:10.351053   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:10.371528   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:10.851165   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:10.871902   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:11.351073   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:11.371601   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:11.487570   18589 system_pods.go:59] 18 kube-system pods found
	I0108 20:15:11.487611   18589 system_pods.go:61] "coredns-5dd5756b68-l64bf" [48f93237-9f80-41e8-808c-0b954f0cb258] Running
	I0108 20:15:11.487616   18589 system_pods.go:61] "csi-hostpath-attacher-0" [6d606313-8fff-4fec-b1df-f8bd17c1856f] Running
	I0108 20:15:11.487620   18589 system_pods.go:61] "csi-hostpath-resizer-0" [662971a8-a51c-48d3-8075-2df14c6c615e] Running
	I0108 20:15:11.487624   18589 system_pods.go:61] "csi-hostpathplugin-ckcrn" [d582c78c-8bde-44a3-8199-7e4f6eb46717] Running
	I0108 20:15:11.487628   18589 system_pods.go:61] "etcd-addons-117367" [f59eab95-6ed4-4b52-be92-5b36466aed6b] Running
	I0108 20:15:11.487632   18589 system_pods.go:61] "kube-apiserver-addons-117367" [6ff91f86-d96c-478b-ba11-ac13c0371ac8] Running
	I0108 20:15:11.487636   18589 system_pods.go:61] "kube-controller-manager-addons-117367" [45cb61c5-3bf6-4efe-8f6e-9ef7e44cab95] Running
	I0108 20:15:11.487640   18589 system_pods.go:61] "kube-ingress-dns-minikube" [fd6398e5-6348-4edb-b263-d0f338f0441b] Running
	I0108 20:15:11.487644   18589 system_pods.go:61] "kube-proxy-x9wjt" [75bffd21-9700-41d8-9ffc-1891f7c19d4a] Running
	I0108 20:15:11.487651   18589 system_pods.go:61] "kube-scheduler-addons-117367" [8a10f233-d827-46e5-99f7-099a3b78bfba] Running
	I0108 20:15:11.487655   18589 system_pods.go:61] "metrics-server-7c66d45ddc-8fbhz" [e9216c24-02bb-430f-9649-eaaf8f8b8782] Running
	I0108 20:15:11.487660   18589 system_pods.go:61] "nvidia-device-plugin-daemonset-4czzg" [a6533da4-4d13-468b-9ddd-3aa8940ce37b] Running
	I0108 20:15:11.487664   18589 system_pods.go:61] "registry-9k4wl" [82d27468-3946-478f-825b-521282fc7a92] Running
	I0108 20:15:11.487668   18589 system_pods.go:61] "registry-proxy-q8br6" [6409afa0-82bf-4dc2-b033-0803a7132987] Running
	I0108 20:15:11.487672   18589 system_pods.go:61] "snapshot-controller-58dbcc7b99-6mwbd" [bceec379-735b-4746-b407-b5ce7ca2737d] Running
	I0108 20:15:11.487676   18589 system_pods.go:61] "snapshot-controller-58dbcc7b99-pdzjs" [eb41863e-516a-497f-8dec-47441aaa36d1] Running
	I0108 20:15:11.487680   18589 system_pods.go:61] "storage-provisioner" [fe147746-c6b7-470c-bed3-c31cc9f36c75] Running
	I0108 20:15:11.487684   18589 system_pods.go:61] "tiller-deploy-7b677967b9-j2j8k" [78b46de8-f390-41b9-ade6-b1ad3f35307f] Running
	I0108 20:15:11.487692   18589 system_pods.go:74] duration metric: took 12.014150558s to wait for pod list to return data ...
	I0108 20:15:11.487702   18589 default_sa.go:34] waiting for default service account to be created ...
	I0108 20:15:11.490439   18589 default_sa.go:45] found service account: "default"
	I0108 20:15:11.490465   18589 default_sa.go:55] duration metric: took 2.756962ms for default service account to be created ...
	I0108 20:15:11.490474   18589 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 20:15:11.501067   18589 system_pods.go:86] 18 kube-system pods found
	I0108 20:15:11.501096   18589 system_pods.go:89] "coredns-5dd5756b68-l64bf" [48f93237-9f80-41e8-808c-0b954f0cb258] Running
	I0108 20:15:11.501102   18589 system_pods.go:89] "csi-hostpath-attacher-0" [6d606313-8fff-4fec-b1df-f8bd17c1856f] Running
	I0108 20:15:11.501106   18589 system_pods.go:89] "csi-hostpath-resizer-0" [662971a8-a51c-48d3-8075-2df14c6c615e] Running
	I0108 20:15:11.501110   18589 system_pods.go:89] "csi-hostpathplugin-ckcrn" [d582c78c-8bde-44a3-8199-7e4f6eb46717] Running
	I0108 20:15:11.501113   18589 system_pods.go:89] "etcd-addons-117367" [f59eab95-6ed4-4b52-be92-5b36466aed6b] Running
	I0108 20:15:11.501117   18589 system_pods.go:89] "kube-apiserver-addons-117367" [6ff91f86-d96c-478b-ba11-ac13c0371ac8] Running
	I0108 20:15:11.501122   18589 system_pods.go:89] "kube-controller-manager-addons-117367" [45cb61c5-3bf6-4efe-8f6e-9ef7e44cab95] Running
	I0108 20:15:11.501126   18589 system_pods.go:89] "kube-ingress-dns-minikube" [fd6398e5-6348-4edb-b263-d0f338f0441b] Running
	I0108 20:15:11.501130   18589 system_pods.go:89] "kube-proxy-x9wjt" [75bffd21-9700-41d8-9ffc-1891f7c19d4a] Running
	I0108 20:15:11.501134   18589 system_pods.go:89] "kube-scheduler-addons-117367" [8a10f233-d827-46e5-99f7-099a3b78bfba] Running
	I0108 20:15:11.501138   18589 system_pods.go:89] "metrics-server-7c66d45ddc-8fbhz" [e9216c24-02bb-430f-9649-eaaf8f8b8782] Running
	I0108 20:15:11.501142   18589 system_pods.go:89] "nvidia-device-plugin-daemonset-4czzg" [a6533da4-4d13-468b-9ddd-3aa8940ce37b] Running
	I0108 20:15:11.501149   18589 system_pods.go:89] "registry-9k4wl" [82d27468-3946-478f-825b-521282fc7a92] Running
	I0108 20:15:11.501153   18589 system_pods.go:89] "registry-proxy-q8br6" [6409afa0-82bf-4dc2-b033-0803a7132987] Running
	I0108 20:15:11.501159   18589 system_pods.go:89] "snapshot-controller-58dbcc7b99-6mwbd" [bceec379-735b-4746-b407-b5ce7ca2737d] Running
	I0108 20:15:11.501163   18589 system_pods.go:89] "snapshot-controller-58dbcc7b99-pdzjs" [eb41863e-516a-497f-8dec-47441aaa36d1] Running
	I0108 20:15:11.501169   18589 system_pods.go:89] "storage-provisioner" [fe147746-c6b7-470c-bed3-c31cc9f36c75] Running
	I0108 20:15:11.501173   18589 system_pods.go:89] "tiller-deploy-7b677967b9-j2j8k" [78b46de8-f390-41b9-ade6-b1ad3f35307f] Running
	I0108 20:15:11.501183   18589 system_pods.go:126] duration metric: took 10.703822ms to wait for k8s-apps to be running ...
	I0108 20:15:11.501192   18589 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:15:11.501236   18589 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:15:11.520821   18589 system_svc.go:56] duration metric: took 19.622332ms WaitForService to wait for kubelet.
	I0108 20:15:11.520855   18589 kubeadm.go:581] duration metric: took 2m14.559774061s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:15:11.520881   18589 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:15:11.524433   18589 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:15:11.524468   18589 node_conditions.go:123] node cpu capacity is 2
	I0108 20:15:11.524484   18589 node_conditions.go:105] duration metric: took 3.597465ms to run NodePressure ...
	I0108 20:15:11.524498   18589 start.go:228] waiting for startup goroutines ...
	I0108 20:15:11.851319   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:11.870907   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:12.351141   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:12.370984   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:12.851933   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:12.871970   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:13.351249   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:13.371744   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:13.852080   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:13.871064   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:14.352153   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:14.371275   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:14.851247   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:14.870728   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:15.352393   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:15.371157   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:15.850813   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:15.870249   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:16.351822   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:16.371563   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:16.852001   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:16.871853   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:17.350949   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:17.373300   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:17.851351   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:17.871608   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:18.351860   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:18.370121   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:18.853210   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:18.873155   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:19.351163   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:19.373012   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:19.853329   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:19.870951   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:20.350867   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:20.370548   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:20.851230   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:20.870741   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:21.351463   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:21.372952   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:21.851207   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:21.871760   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:22.351553   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:22.371436   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:22.851305   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:22.871229   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:23.353295   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:23.371727   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:23.850360   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:23.871602   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:24.351352   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:24.370848   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:24.851369   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:24.877924   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:25.352654   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:25.371216   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:25.908949   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:25.911886   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:26.351947   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:26.370538   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:26.853005   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:26.871816   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:27.350776   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:27.369772   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:27.853804   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:27.871419   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:28.350739   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:28.370584   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:28.853340   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:28.872147   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:29.351075   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:29.371129   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:29.853767   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:29.870835   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:30.353974   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:30.371731   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:30.853778   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:30.870426   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:31.357872   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:31.370983   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:31.851333   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:31.872472   18589 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 20:15:32.351360   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:32.370976   18589 kapi.go:107] duration metric: took 2m25.509442433s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0108 20:15:32.851173   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:33.351624   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:33.852382   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:34.350876   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:34.851638   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:35.350880   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:35.853501   18589 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 20:15:36.351577   18589 kapi.go:107] duration metric: took 2m26.004975217s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0108 20:15:36.353665   18589 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-117367 cluster.
	I0108 20:15:36.355231   18589 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0108 20:15:36.356535   18589 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0108 20:15:36.358082   18589 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0108 20:15:36.360867   18589 addons.go:508] enable addons completed in 2m40.006729227s: enabled=[cloud-spanner default-storageclass storage-provisioner ingress-dns nvidia-device-plugin storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0108 20:15:36.360911   18589 start.go:233] waiting for cluster config update ...
	I0108 20:15:36.360928   18589 start.go:242] writing updated cluster config ...
	I0108 20:15:36.361158   18589 ssh_runner.go:195] Run: rm -f paused
	I0108 20:15:36.416711   18589 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 20:15:36.418740   18589 out.go:177] * Done! kubectl is now configured to use "addons-117367" cluster and "default" namespace by default
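The gcp-auth notes a few lines above describe two knobs: a per-pod opt-out label and a --refresh re-run for pods created before the addon finished. As a minimal sketch of what that looks like in practice (the pod name and image are hypothetical, and the "true" value is an assumption; the message only guarantees the gcp-auth-skip-secret label key itself), the label belongs in the pod manifest at creation time, since the webhook mutates pods on admission:

	# pod manifest fragment -- the gcp-auth-skip-secret label asks the webhook not to mount credentials
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-creds-demo              # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"   # label key from the message above; the value is assumed
	spec:
	  containers:
	  - name: app
	    image: gcr.io/google-samples/hello-app:1.0

	# for pods that already existed when the addon finished, re-run the enable with --refresh,
	# as the message suggests (profile name taken from this run):
	minikube -p addons-117367 addons enable gcp-auth --refresh

This is a sketch under those assumptions, not the addon's documented contract; recreating the pod (rather than labeling it after the fact) is what makes the admission webhook see the label.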
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 20:12:11 UTC, ends at Mon 2024-01-08 20:19:08 UTC. --
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.339495420Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=84820295-2f97-41a4-b027-5ac49560f49a name=/runtime.v1.RuntimeService/Version
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.341598244Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bc57e4c9-b1dc-481e-9d99-6860b57335a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.342920983Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704745148342901281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=bc57e4c9-b1dc-481e-9d99-6860b57335a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.344323030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a1f71d60-17b6-4449-a641-b846d87dd978 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.344379902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a1f71d60-17b6-4449-a641-b846d87dd978 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.344711321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c8dd34fa477deefb1b752c774f415f33af83069085355bf10a7b7241c9ca52ef,PodSandboxId:7a1d8140cd965c4cedc0f48131518ed8f265e787d8cd82a783fbf7a7f1761f09,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704745141241966023,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-77tjj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fbc3b2c9-defb-4a9a-860d-c6c897d03e9f,},Annotations:map[string]string{io.kubernetes.container.hash: 32f7a7c1,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9853d6fb7fc28e921ecb65e4e454dee00eb724eb28c25810e4eaedd82943d8b7,PodSandboxId:4cf83a951b0418929f7949bcb0ee70174b4f82ecd53a6bafbbfe1f534248da14,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704745001497396383,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05f824c9-ec7d-412c-a674-1f893cffb657,},Annotations:map[string]string{io.kubernet
es.container.hash: 2e60e2fe,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bad9d717e6981511e2d81d8d91d921fcb9dfcc1e711168ec5e960dc93d82357,PodSandboxId:8b21a34ebe3bd6490dfc08fa8059ab024ea715c8e8f038de5a1414ff360519df,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704744986819409034,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-8zh4p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: fc6d0204-105e-4658-9084-c38094972eb7,},Annotations:map[string]string{io.kubernetes.container.hash: 287a450,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a384b0623f17c3f3a0e4132f5033cfdaa45bbe46f8a92d90fbdb3ad4417945,PodSandboxId:252e2d72f9e499640b77a230edff89e80f5af9e7d966d317d357d79bc2dd258b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704744934999377958,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-gtl49,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 217221b9-0874-4c10-8923-a7b9dc3eeb51,},Annotations:map[string]string{io.kubernetes.container.hash: f227c777,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3447437a3e348a251b32e77f403b19363150fff1b41bc8388f5f630bf61a08e3,PodSandboxId:73047a6945b5bfdcd572c46cc6e79b4d228b36c66882fe230878d90f9194eb7f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704744880848562785,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2z7tz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0f586cb5-986d-4cfd-9a98-c4b99fe2219b,},Annotations:map[string]string{io.kubernetes.container.hash: d021e270,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e75d8aba40ac55351911e21df383207ee59b7126e6c4ffa6085dc3d1584d5e5,PodSandboxId:5d86aa866771b417ba2a3bd260c7c265845056fd776337d8cebf8e699697a7a5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certge
n@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704744863008022704,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-g8jfv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b5f1d968-0cf2-4e09-9a6d-802c60e50e8a,},Annotations:map[string]string{io.kubernetes.container.hash: eb30a40f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6543c3f6db3d3b8d5d82122c24d9d732414b638b831188a803297f94464a0fc,PodSandboxId:aaea22c2dc85caf37062ce37b9235649e31d2d4c876d6772754af64afb19285c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704744799369272895,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe147746-c6b7-470c-bed3-c31cc9f36c75,},Annotations:map[string]string{io.kubernetes.container.hash: ca917ac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed0411b7cbfc0e3bc0e2aef44ea85952c1dd60744d10aa3d30f65be3416631f,PodSandboxId:5b8d1ea2be3b0326d6cf29e0c031fe86a8503e460939989de6ffe9fa8b0694a4,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/
yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704744799497594718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-f94t4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2775147b-f7b1-4b1f-9010-63889a274022,},Annotations:map[string]string{io.kubernetes.container.hash: 33e1a1ae,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a,PodSandboxId:376795d30b6e045e53e78e9d4eabbe3b588967f26d13cc0c84e84259368d2f81,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899
304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704744792376739171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9wjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75bffd21-9700-41d8-9ffc-1891f7c19d4a,},Annotations:map[string]string{io.kubernetes.container.hash: 8190118e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065,PodSandboxId:ad148b878517944bf57765a7554cff6cd475d0540e0e22b72cfe3fa2419dd68b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{
},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704744779853278239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l64bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f93237-9f80-41e8-808c-0b954f0cb258,},Annotations:map[string]string{io.kubernetes.container.hash: 18266dde,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8,PodSandboxId:4924fb111a1731bf463de247d4cf9b7c5322aa
a2d8a75c7eed7254cd5f3e39b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704744756158482148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b37686a5b29edda671049b71a8a8d618,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c,PodSandboxId:43bf105a61380e7b4621058b726b3ba25f40b2ae29c8a3384360a
bd12a9965b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704744756069755662,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88846a62759ab897881466aa392d8dfc,},Annotations:map[string]string{io.kubernetes.container.hash: 409776a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255,PodSandboxId:e658b3b367f84c158197eaa6ceac46f3d142160c3de80a4afbc507796ccba31c,Metadata:&ContainerMetadata{Name:kube-contr
oller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704744755979564225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c16f3bd1dda3272291846e70863b7a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b,PodSandboxId:6003fe39d6446e042acde8aee3a5a5b729ecf7bb98737236f4fc9e41d04108e2,Metadata:&ContainerMet
adata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704744755762854466,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddbe848093583c8acb8c36ca63529931,},Annotations:map[string]string{io.kubernetes.container.hash: a92a427e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a1f71d60-17b6-4449-a641-b846d87dd978 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.385057277Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=24bf0c58-d1e5-4ca5-87e1-fc80b5332df6 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.385216795Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=24bf0c58-d1e5-4ca5-87e1-fc80b5332df6 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.387075675Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b674ec45-1224-4d24-b198-27c63b93cf90 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.388516400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704745148388493717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=b674ec45-1224-4d24-b198-27c63b93cf90 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.389186392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=853bc8dd-657b-4040-9791-0752097099f5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.389243471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=853bc8dd-657b-4040-9791-0752097099f5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.389557470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c8dd34fa477deefb1b752c774f415f33af83069085355bf10a7b7241c9ca52ef,PodSandboxId:7a1d8140cd965c4cedc0f48131518ed8f265e787d8cd82a783fbf7a7f1761f09,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704745141241966023,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-77tjj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fbc3b2c9-defb-4a9a-860d-c6c897d03e9f,},Annotations:map[string]string{io.kubernetes.container.hash: 32f7a7c1,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9853d6fb7fc28e921ecb65e4e454dee00eb724eb28c25810e4eaedd82943d8b7,PodSandboxId:4cf83a951b0418929f7949bcb0ee70174b4f82ecd53a6bafbbfe1f534248da14,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704745001497396383,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05f824c9-ec7d-412c-a674-1f893cffb657,},Annotations:map[string]string{io.kubernet
es.container.hash: 2e60e2fe,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bad9d717e6981511e2d81d8d91d921fcb9dfcc1e711168ec5e960dc93d82357,PodSandboxId:8b21a34ebe3bd6490dfc08fa8059ab024ea715c8e8f038de5a1414ff360519df,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704744986819409034,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-8zh4p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: fc6d0204-105e-4658-9084-c38094972eb7,},Annotations:map[string]string{io.kubernetes.container.hash: 287a450,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a384b0623f17c3f3a0e4132f5033cfdaa45bbe46f8a92d90fbdb3ad4417945,PodSandboxId:252e2d72f9e499640b77a230edff89e80f5af9e7d966d317d357d79bc2dd258b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704744934999377958,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-gtl49,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 217221b9-0874-4c10-8923-a7b9dc3eeb51,},Annotations:map[string]string{io.kubernetes.container.hash: f227c777,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3447437a3e348a251b32e77f403b19363150fff1b41bc8388f5f630bf61a08e3,PodSandboxId:73047a6945b5bfdcd572c46cc6e79b4d228b36c66882fe230878d90f9194eb7f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704744880848562785,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2z7tz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0f586cb5-986d-4cfd-9a98-c4b99fe2219b,},Annotations:map[string]string{io.kubernetes.container.hash: d021e270,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e75d8aba40ac55351911e21df383207ee59b7126e6c4ffa6085dc3d1584d5e5,PodSandboxId:5d86aa866771b417ba2a3bd260c7c265845056fd776337d8cebf8e699697a7a5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certge
n@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704744863008022704,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-g8jfv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b5f1d968-0cf2-4e09-9a6d-802c60e50e8a,},Annotations:map[string]string{io.kubernetes.container.hash: eb30a40f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6543c3f6db3d3b8d5d82122c24d9d732414b638b831188a803297f94464a0fc,PodSandboxId:aaea22c2dc85caf37062ce37b9235649e31d2d4c876d6772754af64afb19285c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704744799369272895,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe147746-c6b7-470c-bed3-c31cc9f36c75,},Annotations:map[string]string{io.kubernetes.container.hash: ca917ac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed0411b7cbfc0e3bc0e2aef44ea85952c1dd60744d10aa3d30f65be3416631f,PodSandboxId:5b8d1ea2be3b0326d6cf29e0c031fe86a8503e460939989de6ffe9fa8b0694a4,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/
yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704744799497594718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-f94t4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2775147b-f7b1-4b1f-9010-63889a274022,},Annotations:map[string]string{io.kubernetes.container.hash: 33e1a1ae,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a,PodSandboxId:376795d30b6e045e53e78e9d4eabbe3b588967f26d13cc0c84e84259368d2f81,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899
304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704744792376739171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9wjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75bffd21-9700-41d8-9ffc-1891f7c19d4a,},Annotations:map[string]string{io.kubernetes.container.hash: 8190118e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065,PodSandboxId:ad148b878517944bf57765a7554cff6cd475d0540e0e22b72cfe3fa2419dd68b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{
},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704744779853278239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l64bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f93237-9f80-41e8-808c-0b954f0cb258,},Annotations:map[string]string{io.kubernetes.container.hash: 18266dde,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8,PodSandboxId:4924fb111a1731bf463de247d4cf9b7c5322aa
a2d8a75c7eed7254cd5f3e39b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704744756158482148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b37686a5b29edda671049b71a8a8d618,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c,PodSandboxId:43bf105a61380e7b4621058b726b3ba25f40b2ae29c8a3384360a
bd12a9965b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704744756069755662,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88846a62759ab897881466aa392d8dfc,},Annotations:map[string]string{io.kubernetes.container.hash: 409776a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255,PodSandboxId:e658b3b367f84c158197eaa6ceac46f3d142160c3de80a4afbc507796ccba31c,Metadata:&ContainerMetadata{Name:kube-contr
oller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704744755979564225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c16f3bd1dda3272291846e70863b7a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b,PodSandboxId:6003fe39d6446e042acde8aee3a5a5b729ecf7bb98737236f4fc9e41d04108e2,Metadata:&ContainerMet
adata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704744755762854466,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddbe848093583c8acb8c36ca63529931,},Annotations:map[string]string{io.kubernetes.container.hash: a92a427e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=853bc8dd-657b-4040-9791-0752097099f5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.439082191Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=547a1db5-108b-4f31-a94b-f34dfe62d504 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.439239012Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=547a1db5-108b-4f31-a94b-f34dfe62d504 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.441584848Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a389de6a-7c9a-4c41-955b-6029b4e01d50 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.443656820Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704745148443627551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575388,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=a389de6a-7c9a-4c41-955b-6029b4e01d50 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.444582213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=574e4b18-d515-4669-bcf6-ad647e5d7986 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.444687787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=574e4b18-d515-4669-bcf6-ad647e5d7986 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.445027413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c8dd34fa477deefb1b752c774f415f33af83069085355bf10a7b7241c9ca52ef,PodSandboxId:7a1d8140cd965c4cedc0f48131518ed8f265e787d8cd82a783fbf7a7f1761f09,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704745141241966023,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-77tjj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fbc3b2c9-defb-4a9a-860d-c6c897d03e9f,},Annotations:map[string]string{io.kubernetes.container.hash: 32f7a7c1,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9853d6fb7fc28e921ecb65e4e454dee00eb724eb28c25810e4eaedd82943d8b7,PodSandboxId:4cf83a951b0418929f7949bcb0ee70174b4f82ecd53a6bafbbfe1f534248da14,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704745001497396383,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05f824c9-ec7d-412c-a674-1f893cffb657,},Annotations:map[string]string{io.kubernet
es.container.hash: 2e60e2fe,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bad9d717e6981511e2d81d8d91d921fcb9dfcc1e711168ec5e960dc93d82357,PodSandboxId:8b21a34ebe3bd6490dfc08fa8059ab024ea715c8e8f038de5a1414ff360519df,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704744986819409034,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-8zh4p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: fc6d0204-105e-4658-9084-c38094972eb7,},Annotations:map[string]string{io.kubernetes.container.hash: 287a450,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a384b0623f17c3f3a0e4132f5033cfdaa45bbe46f8a92d90fbdb3ad4417945,PodSandboxId:252e2d72f9e499640b77a230edff89e80f5af9e7d966d317d357d79bc2dd258b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704744934999377958,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-gtl49,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 217221b9-0874-4c10-8923-a7b9dc3eeb51,},Annotations:map[string]string{io.kubernetes.container.hash: f227c777,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3447437a3e348a251b32e77f403b19363150fff1b41bc8388f5f630bf61a08e3,PodSandboxId:73047a6945b5bfdcd572c46cc6e79b4d228b36c66882fe230878d90f9194eb7f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704744880848562785,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2z7tz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0f586cb5-986d-4cfd-9a98-c4b99fe2219b,},Annotations:map[string]string{io.kubernetes.container.hash: d021e270,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e75d8aba40ac55351911e21df383207ee59b7126e6c4ffa6085dc3d1584d5e5,PodSandboxId:5d86aa866771b417ba2a3bd260c7c265845056fd776337d8cebf8e699697a7a5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certge
n@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704744863008022704,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-g8jfv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b5f1d968-0cf2-4e09-9a6d-802c60e50e8a,},Annotations:map[string]string{io.kubernetes.container.hash: eb30a40f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6543c3f6db3d3b8d5d82122c24d9d732414b638b831188a803297f94464a0fc,PodSandboxId:aaea22c2dc85caf37062ce37b9235649e31d2d4c876d6772754af64afb19285c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704744799369272895,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe147746-c6b7-470c-bed3-c31cc9f36c75,},Annotations:map[string]string{io.kubernetes.container.hash: ca917ac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed0411b7cbfc0e3bc0e2aef44ea85952c1dd60744d10aa3d30f65be3416631f,PodSandboxId:5b8d1ea2be3b0326d6cf29e0c031fe86a8503e460939989de6ffe9fa8b0694a4,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/
yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704744799497594718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-f94t4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2775147b-f7b1-4b1f-9010-63889a274022,},Annotations:map[string]string{io.kubernetes.container.hash: 33e1a1ae,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a,PodSandboxId:376795d30b6e045e53e78e9d4eabbe3b588967f26d13cc0c84e84259368d2f81,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899
304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704744792376739171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9wjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75bffd21-9700-41d8-9ffc-1891f7c19d4a,},Annotations:map[string]string{io.kubernetes.container.hash: 8190118e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065,PodSandboxId:ad148b878517944bf57765a7554cff6cd475d0540e0e22b72cfe3fa2419dd68b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{
},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704744779853278239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l64bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f93237-9f80-41e8-808c-0b954f0cb258,},Annotations:map[string]string{io.kubernetes.container.hash: 18266dde,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8,PodSandboxId:4924fb111a1731bf463de247d4cf9b7c5322aa
a2d8a75c7eed7254cd5f3e39b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704744756158482148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b37686a5b29edda671049b71a8a8d618,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c,PodSandboxId:43bf105a61380e7b4621058b726b3ba25f40b2ae29c8a3384360a
bd12a9965b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704744756069755662,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88846a62759ab897881466aa392d8dfc,},Annotations:map[string]string{io.kubernetes.container.hash: 409776a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255,PodSandboxId:e658b3b367f84c158197eaa6ceac46f3d142160c3de80a4afbc507796ccba31c,Metadata:&ContainerMetadata{Name:kube-contr
oller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704744755979564225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c16f3bd1dda3272291846e70863b7a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b,PodSandboxId:6003fe39d6446e042acde8aee3a5a5b729ecf7bb98737236f4fc9e41d04108e2,Metadata:&ContainerMet
adata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704744755762854466,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddbe848093583c8acb8c36ca63529931,},Annotations:map[string]string{io.kubernetes.container.hash: a92a427e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=574e4b18-d515-4669-bcf6-ad647e5d7986 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.469790999Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=f1c0c8f3-3d68-4f71-a0cb-4e63f1c288b9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.470340088Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7a1d8140cd965c4cedc0f48131518ed8f265e787d8cd82a783fbf7a7f1761f09,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d77478584-77tjj,Uid:fbc3b2c9-defb-4a9a-860d-c6c897d03e9f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704745138135380256,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d77478584-77tjj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fbc3b2c9-defb-4a9a-860d-c6c897d03e9f,pod-template-hash: 5d77478584,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T20:18:57.796176646Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4cf83a951b0418929f7949bcb0ee70174b4f82ecd53a6bafbbfe1f534248da14,Metadata:&PodSandboxMetadata{Name:nginx,Uid:05f824c9-ec7d-412c-a674-1f893cffb657,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1704744994488281512,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05f824c9-ec7d-412c-a674-1f893cffb657,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T20:16:34.153871794Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8b21a34ebe3bd6490dfc08fa8059ab024ea715c8e8f038de5a1414ff360519df,Metadata:&PodSandboxMetadata{Name:headlamp-7ddfbb94ff-8zh4p,Uid:fc6d0204-105e-4658-9084-c38094972eb7,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704744961104764642,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7ddfbb94ff-8zh4p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: fc6d0204-105e-4658-9084-c38094972eb7,pod-template-hash: 7ddfbb94ff,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
01-08T20:15:57.693834445Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:252e2d72f9e499640b77a230edff89e80f5af9e7d966d317d357d79bc2dd258b,Metadata:&PodSandboxMetadata{Name:gcp-auth-d4c87556c-gtl49,Uid:217221b9-0874-4c10-8923-a7b9dc3eeb51,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704744929623778249,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-d4c87556c-gtl49,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 217221b9-0874-4c10-8923-a7b9dc3eeb51,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: d4c87556c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T20:13:10.284504046Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d6a88d38ee91682d32b2bbcab481d4d7a3754cb4fec831e3b0c3938b2174e0ae,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-69cff4fd79-hmf5x,Uid:247441d9-73f2-480a-8ee9-697095a4d289,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTRE
ADY,CreatedAt:1704744922996694216,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-69cff4fd79-hmf5x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 247441d9-73f2-480a-8ee9-697095a4d289,pod-template-hash: 69cff4fd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T20:13:06.673249190Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d86aa866771b417ba2a3bd260c7c265845056fd776337d8cebf8e699697a7a5,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-g8jfv,Uid:b5f1d968-0cf2-4e09-9a6d-802c60e50e8a,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1704744787184217260,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kub
ernetes.io/controller-uid: 3d3a3dd8-f2ce-4c34-9a86-6429ed5c64bc,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 3d3a3dd8-f2ce-4c34-9a86-6429ed5c64bc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-g8jfv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b5f1d968-0cf2-4e09-9a6d-802c60e50e8a,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T20:13:06.821359974Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:73047a6945b5bfdcd572c46cc6e79b4d228b36c66882fe230878d90f9194eb7f,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-2z7tz,Uid:0f586cb5-986d-4cfd-9a98-c4b99fe2219b,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1704744787151958762,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-u
id: da978ebf-56c4-4317-a376-9cd585c961be,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: da978ebf-56c4-4317-a376-9cd585c961be,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-2z7tz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0f586cb5-986d-4cfd-9a98-c4b99fe2219b,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T20:13:06.807674812Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b8d1ea2be3b0326d6cf29e0c031fe86a8503e460939989de6ffe9fa8b0694a4,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-9947fc6bf-f94t4,Uid:2775147b-f7b1-4b1f-9010-63889a274022,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704744785830534280,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-
f94t4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2775147b-f7b1-4b1f-9010-63889a274022,pod-template-hash: 9947fc6bf,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T20:13:05.484379328Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aaea22c2dc85caf37062ce37b9235649e31d2d4c876d6772754af64afb19285c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:fe147746-c6b7-470c-bed3-c31cc9f36c75,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704744785690664262,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe147746-c6b7-470c-bed3-c31cc9f36c75,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mo
de\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-08T20:13:05.315289508Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b7cc328a84836c571a7d8ffef325a83a9b158931ffc807cf384a28ed72f4f370,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:fd6398e5-6348-4edb-b263-d0f338f0441b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1704744784995754758,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd6398e5-6348-4edb-b263-d0f338f0441b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-01
-08T20:13:04.353742211Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ad148b878517944bf57765a7554cff6cd475d0540e0e22b72cfe3fa2419dd68b,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-l64bf,Uid:48f93237-9f80-41e8-808c-0b954f0cb258,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704744776863427949,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-l64bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f93237-9f80-41e8-808c-0b954f0cb258,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T20:12:56.232425977Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:376795d30b6e045e53e78e9d4eabbe3b588967f26d13cc0c84e84259368d2f81,Metadata:&PodSandboxMetadata{Name:kube-proxy-x9wjt,Uid:75bffd21-9700-41d8-9ffc-1891f7c19d4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704744776500421016,Labels:map[string]string{co
ntroller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-x9wjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75bffd21-9700-41d8-9ffc-1891f7c19d4a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T20:12:56.117276917Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4924fb111a1731bf463de247d4cf9b7c5322aaa2d8a75c7eed7254cd5f3e39b1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-117367,Uid:b37686a5b29edda671049b71a8a8d618,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704744755248405261,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b37686a5b29edda671049b71a8a8d618,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b37686a5b29edda671049b71a8a8d618,kubernetes.i
o/config.seen: 2024-01-08T20:12:34.698921888Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e658b3b367f84c158197eaa6ceac46f3d142160c3de80a4afbc507796ccba31c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-117367,Uid:b2c16f3bd1dda3272291846e70863b7a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704744755241050239,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c16f3bd1dda3272291846e70863b7a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b2c16f3bd1dda3272291846e70863b7a,kubernetes.io/config.seen: 2024-01-08T20:12:34.698920294Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6003fe39d6446e042acde8aee3a5a5b729ecf7bb98737236f4fc9e41d04108e2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-117367,Uid:ddbe848093583c8acb8c36ca635
29931,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704744755235599272,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddbe848093583c8acb8c36ca63529931,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.205:8443,kubernetes.io/config.hash: ddbe848093583c8acb8c36ca63529931,kubernetes.io/config.seen: 2024-01-08T20:12:34.698918573Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:43bf105a61380e7b4621058b726b3ba25f40b2ae29c8a3384360abd12a9965b7,Metadata:&PodSandboxMetadata{Name:etcd-addons-117367,Uid:88846a62759ab897881466aa392d8dfc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704744755171002149,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-117367,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 88846a62759ab897881466aa392d8dfc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.205:2379,kubernetes.io/config.hash: 88846a62759ab897881466aa392d8dfc,kubernetes.io/config.seen: 2024-01-08T20:12:34.698912988Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=f1c0c8f3-3d68-4f71-a0cb-4e63f1c288b9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.471483391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fd82862c-ab23-4e13-bbe7-84e704e16789 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.471585022Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fd82862c-ab23-4e13-bbe7-84e704e16789 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:19:08 addons-117367 crio[715]: time="2024-01-08 20:19:08.472820566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c8dd34fa477deefb1b752c774f415f33af83069085355bf10a7b7241c9ca52ef,PodSandboxId:7a1d8140cd965c4cedc0f48131518ed8f265e787d8cd82a783fbf7a7f1761f09,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704745141241966023,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-77tjj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fbc3b2c9-defb-4a9a-860d-c6c897d03e9f,},Annotations:map[string]string{io.kubernetes.container.hash: 32f7a7c1,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9853d6fb7fc28e921ecb65e4e454dee00eb724eb28c25810e4eaedd82943d8b7,PodSandboxId:4cf83a951b0418929f7949bcb0ee70174b4f82ecd53a6bafbbfe1f534248da14,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704745001497396383,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 05f824c9-ec7d-412c-a674-1f893cffb657,},Annotations:map[string]string{io.kubernet
es.container.hash: 2e60e2fe,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bad9d717e6981511e2d81d8d91d921fcb9dfcc1e711168ec5e960dc93d82357,PodSandboxId:8b21a34ebe3bd6490dfc08fa8059ab024ea715c8e8f038de5a1414ff360519df,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1704744986819409034,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-8zh4p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: fc6d0204-105e-4658-9084-c38094972eb7,},Annotations:map[string]string{io.kubernetes.container.hash: 287a450,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a384b0623f17c3f3a0e4132f5033cfdaa45bbe46f8a92d90fbdb3ad4417945,PodSandboxId:252e2d72f9e499640b77a230edff89e80f5af9e7d966d317d357d79bc2dd258b,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1704744934999377958,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-gtl49,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 217221b9-0874-4c10-8923-a7b9dc3eeb51,},Annotations:map[string]string{io.kubernetes.container.hash: f227c777,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3447437a3e348a251b32e77f403b19363150fff1b41bc8388f5f630bf61a08e3,PodSandboxId:73047a6945b5bfdcd572c46cc6e79b4d228b36c66882fe230878d90f9194eb7f,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c596
5b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704744880848562785,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2z7tz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0f586cb5-986d-4cfd-9a98-c4b99fe2219b,},Annotations:map[string]string{io.kubernetes.container.hash: d021e270,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e75d8aba40ac55351911e21df383207ee59b7126e6c4ffa6085dc3d1584d5e5,PodSandboxId:5d86aa866771b417ba2a3bd260c7c265845056fd776337d8cebf8e699697a7a5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certge
n@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1704744863008022704,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-g8jfv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b5f1d968-0cf2-4e09-9a6d-802c60e50e8a,},Annotations:map[string]string{io.kubernetes.container.hash: eb30a40f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6543c3f6db3d3b8d5d82122c24d9d732414b638b831188a803297f94464a0fc,PodSandboxId:aaea22c2dc85caf37062ce37b9235649e31d2d4c876d6772754af64afb19285c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provis
ioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704744799369272895,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe147746-c6b7-470c-bed3-c31cc9f36c75,},Annotations:map[string]string{io.kubernetes.container.hash: ca917ac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed0411b7cbfc0e3bc0e2aef44ea85952c1dd60744d10aa3d30f65be3416631f,PodSandboxId:5b8d1ea2be3b0326d6cf29e0c031fe86a8503e460939989de6ffe9fa8b0694a4,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/
yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1704744799497594718,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-f94t4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2775147b-f7b1-4b1f-9010-63889a274022,},Annotations:map[string]string{io.kubernetes.container.hash: 33e1a1ae,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a,PodSandboxId:376795d30b6e045e53e78e9d4eabbe3b588967f26d13cc0c84e84259368d2f81,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899
304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704744792376739171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x9wjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75bffd21-9700-41d8-9ffc-1891f7c19d4a,},Annotations:map[string]string{io.kubernetes.container.hash: 8190118e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065,PodSandboxId:ad148b878517944bf57765a7554cff6cd475d0540e0e22b72cfe3fa2419dd68b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{
},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704744779853278239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l64bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48f93237-9f80-41e8-808c-0b954f0cb258,},Annotations:map[string]string{io.kubernetes.container.hash: 18266dde,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8,PodSandboxId:4924fb111a1731bf463de247d4cf9b7c5322aa
a2d8a75c7eed7254cd5f3e39b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704744756158482148,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b37686a5b29edda671049b71a8a8d618,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c,PodSandboxId:43bf105a61380e7b4621058b726b3ba25f40b2ae29c8a3384360a
bd12a9965b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704744756069755662,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88846a62759ab897881466aa392d8dfc,},Annotations:map[string]string{io.kubernetes.container.hash: 409776a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255,PodSandboxId:e658b3b367f84c158197eaa6ceac46f3d142160c3de80a4afbc507796ccba31c,Metadata:&ContainerMetadata{Name:kube-contr
oller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704744755979564225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2c16f3bd1dda3272291846e70863b7a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b,PodSandboxId:6003fe39d6446e042acde8aee3a5a5b729ecf7bb98737236f4fc9e41d04108e2,Metadata:&ContainerMet
adata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704744755762854466,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-117367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddbe848093583c8acb8c36ca63529931,},Annotations:map[string]string{io.kubernetes.container.hash: a92a427e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fd82862c-ab23-4e13-bbe7-84e704e16789 name=/runtime.v1.RuntimeService/ListContainers
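
The CRI-O entries above are captured from the crio systemd unit's journal on the node. As a rough sketch for manual debugging (assuming the same profile name and the locally built minikube binary used elsewhere in this report), a comparable slice of that journal can be pulled straight from the VM:

    out/minikube-linux-amd64 -p addons-117367 ssh "sudo journalctl -u crio --no-pager | tail -n 200"

The pipe runs inside the VM shell; the tail length is arbitrary and only meant to keep the output manageable.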
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c8dd34fa477de       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   7a1d8140cd965       hello-world-app-5d77478584-77tjj
	9853d6fb7fc28       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                              2 minutes ago       Running             nginx                     0                   4cf83a951b041       nginx
	1bad9d717e698       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   8b21a34ebe3bd       headlamp-7ddfbb94ff-8zh4p
	18a384b0623f1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   252e2d72f9e49       gcp-auth-d4c87556c-gtl49
	3447437a3e348       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              patch                     0                   73047a6945b5b       ingress-nginx-admission-patch-2z7tz
	5e75d8aba40ac       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              create                    0                   5d86aa866771b       ingress-nginx-admission-create-g8jfv
	8ed0411b7cbfc       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              5 minutes ago       Running             yakd                      0                   5b8d1ea2be3b0       yakd-dashboard-9947fc6bf-f94t4
	e6543c3f6db3d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   aaea22c2dc85c       storage-provisioner
	2e0ed0766771f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             5 minutes ago       Running             kube-proxy                0                   376795d30b6e0       kube-proxy-x9wjt
	61786fa344dd4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             6 minutes ago       Running             coredns                   0                   ad148b8785179       coredns-5dd5756b68-l64bf
	71ad9f9df59f7       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             6 minutes ago       Running             kube-scheduler            0                   4924fb111a173       kube-scheduler-addons-117367
	098e44261ab40       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             6 minutes ago       Running             etcd                      0                   43bf105a61380       etcd-addons-117367
	b97af58a89d77       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             6 minutes ago       Running             kube-controller-manager   0                   e658b3b367f84       kube-controller-manager-addons-117367
	1e5c2b45dba1f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             6 minutes ago       Running             kube-apiserver            0                   6003fe39d6446       kube-apiserver-addons-117367
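
The table above is the runtime snapshot minikube gathers through the CRI ListContainers call shown in the crio log. Assuming the crictl CLI shipped in the minikube VM, an equivalent listing (including the exited admission-webhook containers) can be produced by hand:

    out/minikube-linux-amd64 -p addons-117367 ssh "sudo crictl ps -a"

This is a sketch for interactive debugging only; the test harness itself does not shell out to crictl for this data.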
	
	
	==> coredns [61786fa344dd4da6d24c7bd155b772ac5113b9e3904099720eaad5aadda1a065] <==
	[INFO] 10.244.0.9:33867 - 65313 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000298827s
	[INFO] 10.244.0.9:58786 - 58338 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000097303s
	[INFO] 10.244.0.9:58786 - 42464 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074285s
	[INFO] 10.244.0.9:46127 - 21141 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000076981s
	[INFO] 10.244.0.9:46127 - 8088 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000076387s
	[INFO] 10.244.0.9:39777 - 31094 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000078063s
	[INFO] 10.244.0.9:39777 - 11380 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000076663s
	[INFO] 10.244.0.9:56659 - 8459 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000060817s
	[INFO] 10.244.0.9:56659 - 23569 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000051532s
	[INFO] 10.244.0.9:35606 - 58346 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035886s
	[INFO] 10.244.0.9:35606 - 8167 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000033192s
	[INFO] 10.244.0.9:40528 - 56776 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043273s
	[INFO] 10.244.0.9:40528 - 44577 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055646s
	[INFO] 10.244.0.9:34539 - 21585 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000031306s
	[INFO] 10.244.0.9:34539 - 49999 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000029551s
	[INFO] 10.244.0.22:57757 - 6901 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00046878s
	[INFO] 10.244.0.22:48280 - 31104 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000101704s
	[INFO] 10.244.0.22:43334 - 51817 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000206683s
	[INFO] 10.244.0.22:45411 - 57760 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000209872s
	[INFO] 10.244.0.22:55056 - 11089 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000178068s
	[INFO] 10.244.0.22:57315 - 60748 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000163867s
	[INFO] 10.244.0.22:44365 - 26484 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000843893s
	[INFO] 10.244.0.22:35822 - 59771 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000617667s
	[INFO] 10.244.0.25:47575 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000823638s
	[INFO] 10.244.0.25:56762 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000184633s
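
The runs of NXDOMAIN answers above are the expected search-domain expansion for in-cluster lookups: each name is retried with the pod's search suffixes before the bare registry.kube-system.svc.cluster.local query returns NOERROR. To exercise the same lookup path manually, a throwaway pod can be used; the pod name and image tag below are illustrative, any image with nslookup works:

    kubectl --context addons-117367 run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local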
	
	
	==> describe nodes <==
	Name:               addons-117367
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-117367
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=addons-117367
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T20_12_43_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-117367
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:12:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-117367
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:19:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:16:49 +0000   Mon, 08 Jan 2024 20:12:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:16:49 +0000   Mon, 08 Jan 2024 20:12:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:16:49 +0000   Mon, 08 Jan 2024 20:12:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:16:49 +0000   Mon, 08 Jan 2024 20:12:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    addons-117367
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 4cb9392fca044ee59d00015b57834e9d
	  System UUID:                4cb9392f-ca04-4ee5-9d00-015b57834e9d
	  Boot ID:                    847e6b41-a81b-4d83-b7d9-4a292f7413fd
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-77tjj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-d4c87556c-gtl49                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  headlamp                    headlamp-7ddfbb94ff-8zh4p                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 coredns-5dd5756b68-l64bf                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m12s
	  kube-system                 etcd-addons-117367                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m24s
	  kube-system                 kube-apiserver-addons-117367             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-controller-manager-addons-117367    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-proxy-x9wjt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-scheduler-addons-117367             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-f94t4           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)   0 (0%)
	  memory             298Mi (7%)   426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m34s (x8 over 6m34s)  kubelet          Node addons-117367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x8 over 6m34s)  kubelet          Node addons-117367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x7 over 6m34s)  kubelet          Node addons-117367 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m25s                  kubelet          Node addons-117367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s                  kubelet          Node addons-117367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s                  kubelet          Node addons-117367 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m24s                  kubelet          Node addons-117367 status is now: NodeReady
	  Normal  RegisteredNode           6m13s                  node-controller  Node addons-117367 event: Registered Node addons-117367 in Controller
	
	
	==> dmesg <==
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.021339] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.774800] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.109776] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.152387] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.112133] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.226493] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +9.601152] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[  +9.280586] systemd-fstab-generator[1247]: Ignoring "noauto" for root device
	[Jan 8 20:13] kauditd_printk_skb: 59 callbacks suppressed
	[  +8.676723] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.730655] kauditd_printk_skb: 16 callbacks suppressed
	[Jan 8 20:14] kauditd_printk_skb: 18 callbacks suppressed
	[Jan 8 20:15] kauditd_printk_skb: 22 callbacks suppressed
	[  +8.776093] kauditd_printk_skb: 18 callbacks suppressed
	[ +25.002394] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.178205] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.488578] kauditd_printk_skb: 21 callbacks suppressed
	[Jan 8 20:16] kauditd_printk_skb: 5 callbacks suppressed
	[ +23.666280] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.876547] kauditd_printk_skb: 39 callbacks suppressed
	[  +8.888108] kauditd_printk_skb: 12 callbacks suppressed
	[Jan 8 20:19] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [098e44261ab40a537dd353418f66a0804b716c5b0434e1219295cf94806e685c] <==
	{"level":"warn","ts":"2024-01-08T20:15:46.589252Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.334516ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T20:15:46.58927Z","caller":"traceutil/trace.go:171","msg":"trace[9723325] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1347; }","duration":"151.355268ms","start":"2024-01-08T20:15:46.43791Z","end":"2024-01-08T20:15:46.589266Z","steps":["trace[9723325] 'agreement among raft nodes before linearized reading'  (duration: 151.320479ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:16:00.923785Z","caller":"traceutil/trace.go:171","msg":"trace[1237824630] linearizableReadLoop","detail":"{readStateIndex:1567; appliedIndex:1566; }","duration":"164.44389ms","start":"2024-01-08T20:16:00.759328Z","end":"2024-01-08T20:16:00.923772Z","steps":["trace[1237824630] 'read index received'  (duration: 164.3152ms)","trace[1237824630] 'applied index is now lower than readState.Index'  (duration: 126.224µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T20:16:00.923981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.651896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:2966"}
	{"level":"info","ts":"2024-01-08T20:16:00.924035Z","caller":"traceutil/trace.go:171","msg":"trace[1003703238] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:1506; }","duration":"164.723093ms","start":"2024-01-08T20:16:00.759304Z","end":"2024-01-08T20:16:00.924027Z","steps":["trace[1003703238] 'agreement among raft nodes before linearized reading'  (duration: 164.597201ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:16:00.924525Z","caller":"traceutil/trace.go:171","msg":"trace[749608275] transaction","detail":"{read_only:false; response_revision:1506; number_of_response:1; }","duration":"172.773192ms","start":"2024-01-08T20:16:00.751614Z","end":"2024-01-08T20:16:00.924387Z","steps":["trace[749608275] 'process raft request'  (duration: 172.072263ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:16:06.030548Z","caller":"traceutil/trace.go:171","msg":"trace[380378211] transaction","detail":"{read_only:false; response_revision:1544; number_of_response:1; }","duration":"110.125055ms","start":"2024-01-08T20:16:05.920403Z","end":"2024-01-08T20:16:06.030528Z","steps":["trace[380378211] 'process raft request'  (duration: 110.000775ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:16:25.543901Z","caller":"traceutil/trace.go:171","msg":"trace[1798710805] linearizableReadLoop","detail":"{readStateIndex:1675; appliedIndex:1674; }","duration":"432.700069ms","start":"2024-01-08T20:16:25.111175Z","end":"2024-01-08T20:16:25.543875Z","steps":["trace[1798710805] 'read index received'  (duration: 432.403881ms)","trace[1798710805] 'applied index is now lower than readState.Index'  (duration: 295.688µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T20:16:25.544294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"395.1982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/pvc-c8a6c247-5d06-4b89-8f77-d084297eda51\" ","response":"range_response_count:1 size:1252"}
	{"level":"info","ts":"2024-01-08T20:16:25.544373Z","caller":"traceutil/trace.go:171","msg":"trace[1400396542] range","detail":"{range_begin:/registry/persistentvolumes/pvc-c8a6c247-5d06-4b89-8f77-d084297eda51; range_end:; response_count:1; response_revision:1609; }","duration":"395.297608ms","start":"2024-01-08T20:16:25.149065Z","end":"2024-01-08T20:16:25.544363Z","steps":["trace[1400396542] 'agreement among raft nodes before linearized reading'  (duration: 395.161095ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:16:25.544427Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T20:16:25.14905Z","time spent":"395.365407ms","remote":"127.0.0.1:56198","response type":"/etcdserverpb.KV/Range","request count":0,"request size":70,"response count":1,"response size":1275,"request content":"key:\"/registry/persistentvolumes/pvc-c8a6c247-5d06-4b89-8f77-d084297eda51\" "}
	{"level":"warn","ts":"2024-01-08T20:16:25.544517Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"433.325395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-01-08T20:16:25.546876Z","caller":"traceutil/trace.go:171","msg":"trace[725278457] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1609; }","duration":"435.784345ms","start":"2024-01-08T20:16:25.111081Z","end":"2024-01-08T20:16:25.546865Z","steps":["trace[725278457] 'agreement among raft nodes before linearized reading'  (duration: 433.320552ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:16:25.546946Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T20:16:25.111066Z","time spent":"435.864079ms","remote":"127.0.0.1:56260","response type":"/etcdserverpb.KV/Range","request count":0,"request size":82,"response count":8,"response size":30,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true "}
	{"level":"warn","ts":"2024-01-08T20:16:25.544684Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.986762ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-01-08T20:16:25.544754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"395.104063ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/hpvc-restore.17a878851704eaf4\" ","response":"range_response_count:1 size:911"}
	{"level":"info","ts":"2024-01-08T20:16:25.547623Z","caller":"traceutil/trace.go:171","msg":"trace[1796493565] range","detail":"{range_begin:/registry/events/default/hpvc-restore.17a878851704eaf4; range_end:; response_count:1; response_revision:1609; }","duration":"397.969913ms","start":"2024-01-08T20:16:25.149644Z","end":"2024-01-08T20:16:25.547614Z","steps":["trace[1796493565] 'agreement among raft nodes before linearized reading'  (duration: 395.08211ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T20:16:25.547573Z","caller":"traceutil/trace.go:171","msg":"trace[795647041] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1609; }","duration":"110.520593ms","start":"2024-01-08T20:16:25.436689Z","end":"2024-01-08T20:16:25.54721Z","steps":["trace[795647041] 'agreement among raft nodes before linearized reading'  (duration: 107.96783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:16:25.547669Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T20:16:25.149637Z","time spent":"398.022581ms","remote":"127.0.0.1:56182","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":934,"request content":"key:\"/registry/events/default/hpvc-restore.17a878851704eaf4\" "}
	{"level":"info","ts":"2024-01-08T20:16:25.544373Z","caller":"traceutil/trace.go:171","msg":"trace[1440756841] transaction","detail":"{read_only:false; response_revision:1609; number_of_response:1; }","duration":"458.967315ms","start":"2024-01-08T20:16:25.085395Z","end":"2024-01-08T20:16:25.544363Z","steps":["trace[1440756841] 'process raft request'  (duration: 458.317263ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:16:25.548287Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T20:16:25.085358Z","time spent":"462.882526ms","remote":"127.0.0.1:56202","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1604 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-01-08T20:16:35.76486Z","caller":"traceutil/trace.go:171","msg":"trace[2026676294] linearizableReadLoop","detail":"{readStateIndex:1754; appliedIndex:1753; }","duration":"118.858502ms","start":"2024-01-08T20:16:35.645961Z","end":"2024-01-08T20:16:35.76482Z","steps":["trace[2026676294] 'read index received'  (duration: 118.647163ms)","trace[2026676294] 'applied index is now lower than readState.Index'  (duration: 210.741µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T20:16:35.765044Z","caller":"traceutil/trace.go:171","msg":"trace[848350321] transaction","detail":"{read_only:false; response_revision:1684; number_of_response:1; }","duration":"149.089245ms","start":"2024-01-08T20:16:35.615945Z","end":"2024-01-08T20:16:35.765034Z","steps":["trace[848350321] 'process raft request'  (duration: 148.737451ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T20:16:35.76535Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.385223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-01-08T20:16:35.765424Z","caller":"traceutil/trace.go:171","msg":"trace[1147890641] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1684; }","duration":"119.472851ms","start":"2024-01-08T20:16:35.645937Z","end":"2024-01-08T20:16:35.76541Z","steps":["trace[1147890641] 'agreement among raft nodes before linearized reading'  (duration: 119.356133ms)"],"step_count":1}
	
	
	==> gcp-auth [18a384b0623f17c3f3a0e4132f5033cfdaa45bbe46f8a92d90fbdb3ad4417945] <==
	2024/01/08 20:15:35 GCP Auth Webhook started!
	2024/01/08 20:15:36 Ready to marshal response ...
	2024/01/08 20:15:36 Ready to write response ...
	2024/01/08 20:15:36 Ready to marshal response ...
	2024/01/08 20:15:36 Ready to write response ...
	2024/01/08 20:15:47 Ready to marshal response ...
	2024/01/08 20:15:47 Ready to write response ...
	2024/01/08 20:15:49 Ready to marshal response ...
	2024/01/08 20:15:49 Ready to write response ...
	2024/01/08 20:15:52 Ready to marshal response ...
	2024/01/08 20:15:52 Ready to write response ...
	2024/01/08 20:15:57 Ready to marshal response ...
	2024/01/08 20:15:57 Ready to write response ...
	2024/01/08 20:15:57 Ready to marshal response ...
	2024/01/08 20:15:57 Ready to write response ...
	2024/01/08 20:15:57 Ready to marshal response ...
	2024/01/08 20:15:57 Ready to write response ...
	2024/01/08 20:16:07 Ready to marshal response ...
	2024/01/08 20:16:07 Ready to write response ...
	2024/01/08 20:16:27 Ready to marshal response ...
	2024/01/08 20:16:27 Ready to write response ...
	2024/01/08 20:16:34 Ready to marshal response ...
	2024/01/08 20:16:34 Ready to write response ...
	2024/01/08 20:18:57 Ready to marshal response ...
	2024/01/08 20:18:57 Ready to write response ...
	
	
	==> kernel <==
	 20:19:08 up 7 min,  0 users,  load average: 0.47, 1.59, 0.99
	Linux addons-117367 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [1e5c2b45dba1f5259b11e3127696e699530ed0339034a83e92321b0c9d6bcf2b] <==
	I0108 20:16:38.358813       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0108 20:16:39.404906       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0108 20:16:46.035364       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:16:46.035433       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:16:46.043934       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:16:46.044051       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:16:46.056490       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:16:46.057667       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:16:46.093276       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:16:46.093340       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:16:46.094046       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:16:46.094072       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:16:46.094361       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:16:46.094422       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:16:46.114088       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:16:46.114272       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0108 20:16:46.129674       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0108 20:16:46.129771       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0108 20:16:47.095261       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0108 20:16:47.129749       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0108 20:16:47.144540       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0108 20:16:49.653745       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0108 20:18:57.995158       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.223.76"}
	E0108 20:19:00.558651       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0108 20:19:03.480309       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [b97af58a89d778f050bf5139f8963bccaa8d3f715a1228b7527d771e00a16255] <==
	W0108 20:17:57.765698       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:17:57.765741       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:18:02.805470       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:18:02.805586       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:18:06.543957       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:18:06.544015       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:18:36.212909       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:18:36.212999       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:18:37.824070       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:18:37.824180       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0108 20:18:38.895606       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:18:38.895712       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0108 20:18:57.694962       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0108 20:18:57.782530       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-77tjj"
	I0108 20:18:57.798462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="103.503069ms"
	I0108 20:18:57.818345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="17.591746ms"
	I0108 20:18:57.818537       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.855µs"
	I0108 20:18:57.820863       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.66µs"
	I0108 20:19:00.378383       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0108 20:19:00.385656       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="4.443µs"
	I0108 20:19:00.402383       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0108 20:19:01.468857       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="13.81098ms"
	I0108 20:19:01.469072       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.624µs"
	W0108 20:19:05.962950       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0108 20:19:05.963018       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [2e0ed0766771f3274de566fc55b42267b6d71f7783a70a627c8595e9df98ce6a] <==
	I0108 20:13:14.528296       1 server_others.go:69] "Using iptables proxy"
	I0108 20:13:14.797401       1 node.go:141] Successfully retrieved node IP: 192.168.39.205
	I0108 20:13:16.391388       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 20:13:16.391464       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 20:13:16.653980       1 server_others.go:152] "Using iptables Proxier"
	I0108 20:13:16.654201       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 20:13:16.654716       1 server.go:846] "Version info" version="v1.28.4"
	I0108 20:13:16.654732       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:13:16.660873       1 config.go:188] "Starting service config controller"
	I0108 20:13:16.880578       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:13:16.660979       1 config.go:315] "Starting node config controller"
	I0108 20:13:16.724275       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:13:16.889559       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:13:16.889575       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 20:13:16.889671       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:13:16.890572       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:13:16.890584       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [71ad9f9df59f7718d7034f6cc0ab4699ca278911099b5a1b43c1d622c92521a8] <==
	W0108 20:12:40.357365       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:12:40.357401       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:12:40.357429       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:12:40.357449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 20:12:40.358329       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:12:40.358373       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 20:12:41.196553       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:12:41.196604       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:12:41.213058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 20:12:41.213250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 20:12:41.336920       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 20:12:41.337011       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 20:12:41.435670       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:12:41.435752       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 20:12:41.435904       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:12:41.435936       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 20:12:41.483634       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 20:12:41.483764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 20:12:41.486439       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:12:41.486461       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 20:12:41.632246       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:12:41.632361       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 20:12:41.634444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:12:41.634493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0108 20:12:43.448797       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 20:12:11 UTC, ends at Mon 2024-01-08 20:19:09 UTC. --
	Jan 08 20:18:57 addons-117367 kubelet[1254]: I0108 20:18:57.796755    1254 memory_manager.go:346] "RemoveStaleState removing state" podUID="d582c78c-8bde-44a3-8199-7e4f6eb46717" containerName="csi-snapshotter"
	Jan 08 20:18:57 addons-117367 kubelet[1254]: I0108 20:18:57.796763    1254 memory_manager.go:346] "RemoveStaleState removing state" podUID="784b0897-356d-4dbd-a2f5-1f25d04e1e4c" containerName="task-pv-container"
	Jan 08 20:18:57 addons-117367 kubelet[1254]: I0108 20:18:57.892545    1254 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fbc3b2c9-defb-4a9a-860d-c6c897d03e9f-gcp-creds\") pod \"hello-world-app-5d77478584-77tjj\" (UID: \"fbc3b2c9-defb-4a9a-860d-c6c897d03e9f\") " pod="default/hello-world-app-5d77478584-77tjj"
	Jan 08 20:18:57 addons-117367 kubelet[1254]: I0108 20:18:57.892618    1254 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49hnr\" (UniqueName: \"kubernetes.io/projected/fbc3b2c9-defb-4a9a-860d-c6c897d03e9f-kube-api-access-49hnr\") pod \"hello-world-app-5d77478584-77tjj\" (UID: \"fbc3b2c9-defb-4a9a-860d-c6c897d03e9f\") " pod="default/hello-world-app-5d77478584-77tjj"
	Jan 08 20:18:59 addons-117367 kubelet[1254]: I0108 20:18:59.303827    1254 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ls24\" (UniqueName: \"kubernetes.io/projected/fd6398e5-6348-4edb-b263-d0f338f0441b-kube-api-access-5ls24\") pod \"fd6398e5-6348-4edb-b263-d0f338f0441b\" (UID: \"fd6398e5-6348-4edb-b263-d0f338f0441b\") "
	Jan 08 20:18:59 addons-117367 kubelet[1254]: I0108 20:18:59.308863    1254 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd6398e5-6348-4edb-b263-d0f338f0441b-kube-api-access-5ls24" (OuterVolumeSpecName: "kube-api-access-5ls24") pod "fd6398e5-6348-4edb-b263-d0f338f0441b" (UID: "fd6398e5-6348-4edb-b263-d0f338f0441b"). InnerVolumeSpecName "kube-api-access-5ls24". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 20:18:59 addons-117367 kubelet[1254]: I0108 20:18:59.405023    1254 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5ls24\" (UniqueName: \"kubernetes.io/projected/fd6398e5-6348-4edb-b263-d0f338f0441b-kube-api-access-5ls24\") on node \"addons-117367\" DevicePath \"\""
	Jan 08 20:18:59 addons-117367 kubelet[1254]: I0108 20:18:59.416439    1254 scope.go:117] "RemoveContainer" containerID="1ac8abec4d845a54f9ee16c899023b492e3352cef02edc396dc8a589a712d8b3"
	Jan 08 20:18:59 addons-117367 kubelet[1254]: I0108 20:18:59.471790    1254 scope.go:117] "RemoveContainer" containerID="1ac8abec4d845a54f9ee16c899023b492e3352cef02edc396dc8a589a712d8b3"
	Jan 08 20:18:59 addons-117367 kubelet[1254]: E0108 20:18:59.472487    1254 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ac8abec4d845a54f9ee16c899023b492e3352cef02edc396dc8a589a712d8b3\": container with ID starting with 1ac8abec4d845a54f9ee16c899023b492e3352cef02edc396dc8a589a712d8b3 not found: ID does not exist" containerID="1ac8abec4d845a54f9ee16c899023b492e3352cef02edc396dc8a589a712d8b3"
	Jan 08 20:18:59 addons-117367 kubelet[1254]: I0108 20:18:59.472544    1254 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ac8abec4d845a54f9ee16c899023b492e3352cef02edc396dc8a589a712d8b3"} err="failed to get container status \"1ac8abec4d845a54f9ee16c899023b492e3352cef02edc396dc8a589a712d8b3\": rpc error: code = NotFound desc = could not find container \"1ac8abec4d845a54f9ee16c899023b492e3352cef02edc396dc8a589a712d8b3\": container with ID starting with 1ac8abec4d845a54f9ee16c899023b492e3352cef02edc396dc8a589a712d8b3 not found: ID does not exist"
	Jan 08 20:18:59 addons-117367 kubelet[1254]: I0108 20:18:59.962637    1254 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fd6398e5-6348-4edb-b263-d0f338f0441b" path="/var/lib/kubelet/pods/fd6398e5-6348-4edb-b263-d0f338f0441b/volumes"
	Jan 08 20:19:01 addons-117367 kubelet[1254]: I0108 20:19:01.961572    1254 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0f586cb5-986d-4cfd-9a98-c4b99fe2219b" path="/var/lib/kubelet/pods/0f586cb5-986d-4cfd-9a98-c4b99fe2219b/volumes"
	Jan 08 20:19:01 addons-117367 kubelet[1254]: I0108 20:19:01.961993    1254 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b5f1d968-0cf2-4e09-9a6d-802c60e50e8a" path="/var/lib/kubelet/pods/b5f1d968-0cf2-4e09-9a6d-802c60e50e8a/volumes"
	Jan 08 20:19:03 addons-117367 kubelet[1254]: I0108 20:19:03.844938    1254 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/247441d9-73f2-480a-8ee9-697095a4d289-webhook-cert\") pod \"247441d9-73f2-480a-8ee9-697095a4d289\" (UID: \"247441d9-73f2-480a-8ee9-697095a4d289\") "
	Jan 08 20:19:03 addons-117367 kubelet[1254]: I0108 20:19:03.845632    1254 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvsgc\" (UniqueName: \"kubernetes.io/projected/247441d9-73f2-480a-8ee9-697095a4d289-kube-api-access-gvsgc\") pod \"247441d9-73f2-480a-8ee9-697095a4d289\" (UID: \"247441d9-73f2-480a-8ee9-697095a4d289\") "
	Jan 08 20:19:03 addons-117367 kubelet[1254]: I0108 20:19:03.850255    1254 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247441d9-73f2-480a-8ee9-697095a4d289-kube-api-access-gvsgc" (OuterVolumeSpecName: "kube-api-access-gvsgc") pod "247441d9-73f2-480a-8ee9-697095a4d289" (UID: "247441d9-73f2-480a-8ee9-697095a4d289"). InnerVolumeSpecName "kube-api-access-gvsgc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 20:19:03 addons-117367 kubelet[1254]: I0108 20:19:03.850703    1254 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247441d9-73f2-480a-8ee9-697095a4d289-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "247441d9-73f2-480a-8ee9-697095a4d289" (UID: "247441d9-73f2-480a-8ee9-697095a4d289"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:19:03 addons-117367 kubelet[1254]: I0108 20:19:03.946575    1254 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/247441d9-73f2-480a-8ee9-697095a4d289-webhook-cert\") on node \"addons-117367\" DevicePath \"\""
	Jan 08 20:19:03 addons-117367 kubelet[1254]: I0108 20:19:03.946647    1254 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gvsgc\" (UniqueName: \"kubernetes.io/projected/247441d9-73f2-480a-8ee9-697095a4d289-kube-api-access-gvsgc\") on node \"addons-117367\" DevicePath \"\""
	Jan 08 20:19:03 addons-117367 kubelet[1254]: I0108 20:19:03.961088    1254 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="247441d9-73f2-480a-8ee9-697095a4d289" path="/var/lib/kubelet/pods/247441d9-73f2-480a-8ee9-697095a4d289/volumes"
	Jan 08 20:19:04 addons-117367 kubelet[1254]: I0108 20:19:04.451434    1254 scope.go:117] "RemoveContainer" containerID="abbd9e4cfb34a12699e65fcc156f655639a0c90d4758abaafc0f4600d10abd03"
	Jan 08 20:19:04 addons-117367 kubelet[1254]: I0108 20:19:04.475217    1254 scope.go:117] "RemoveContainer" containerID="abbd9e4cfb34a12699e65fcc156f655639a0c90d4758abaafc0f4600d10abd03"
	Jan 08 20:19:04 addons-117367 kubelet[1254]: E0108 20:19:04.475960    1254 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abbd9e4cfb34a12699e65fcc156f655639a0c90d4758abaafc0f4600d10abd03\": container with ID starting with abbd9e4cfb34a12699e65fcc156f655639a0c90d4758abaafc0f4600d10abd03 not found: ID does not exist" containerID="abbd9e4cfb34a12699e65fcc156f655639a0c90d4758abaafc0f4600d10abd03"
	Jan 08 20:19:04 addons-117367 kubelet[1254]: I0108 20:19:04.476001    1254 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abbd9e4cfb34a12699e65fcc156f655639a0c90d4758abaafc0f4600d10abd03"} err="failed to get container status \"abbd9e4cfb34a12699e65fcc156f655639a0c90d4758abaafc0f4600d10abd03\": rpc error: code = NotFound desc = could not find container \"abbd9e4cfb34a12699e65fcc156f655639a0c90d4758abaafc0f4600d10abd03\": container with ID starting with abbd9e4cfb34a12699e65fcc156f655639a0c90d4758abaafc0f4600d10abd03 not found: ID does not exist"
	
	
	==> storage-provisioner [e6543c3f6db3d3b8d5d82122c24d9d732414b638b831188a803297f94464a0fc] <==
	I0108 20:13:20.836323       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:13:20.853499       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:13:20.853538       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 20:13:20.877202       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 20:13:20.879896       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-117367_d7b8bb74-d2bc-4541-9738-6daa9aaae7c6!
	I0108 20:13:20.881809       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f5361208-0d6b-4501-ad79-9c7f8dd1efb0", APIVersion:"v1", ResourceVersion:"886", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-117367_d7b8bb74-d2bc-4541-9738-6daa9aaae7c6 became leader
	I0108 20:13:21.120245       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-117367_d7b8bb74-d2bc-4541-9738-6daa9aaae7c6!
	E0108 20:16:37.885464       1 controller.go:1050] claim "bf96c3e3-7dbb-430a-b4b7-b7f250a8fa16" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-117367 -n addons-117367
helpers_test.go:261: (dbg) Run:  kubectl --context addons-117367 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (156.01s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (155.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-117367
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-117367: exit status 82 (2m1.440800784s)

                                                
                                                
-- stdout --
	* Stopping node "addons-117367"  ...
	* Stopping node "addons-117367"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-117367" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-117367
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-117367: exit status 11 (21.48708738s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-117367" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-117367
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-117367: exit status 11 (6.146594959s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-117367" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-117367
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-117367: exit status 11 (6.141538654s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.205:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-117367" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.22s)
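
The failure above follows one pattern: "minikube stop" times out while the VM stays in the "Running" state, and every subsequent addon command then fails because SSH to 192.168.39.205:22 has no route to host. A minimal reproduction sketch of the same sequence, assuming the profile name addons-117367 taken from the logs above; the status call is an added diagnostic step, not part of the test itself:

    # Sketch only: mirrors the sequence exercised by TestAddons/StoppedEnableDisable.
    out/minikube-linux-amd64 stop -p addons-117367                      # hit GUEST_STOP_TIMEOUT (exit 82) in this run
    out/minikube-linux-amd64 status -p addons-117367                    # diagnostic: VM still reported as Running after the failed stop
    out/minikube-linux-amd64 addons enable dashboard -p addons-117367   # exit 11: ssh to 192.168.39.205:22, no route to host
    out/minikube-linux-amd64 logs --file=logs.txt -p addons-117367      # collect logs, as the error output suggests
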

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (17.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.512333476s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-776422
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image load --daemon gcr.io/google-containers/addon-resizer:functional-776422 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 image load --daemon gcr.io/google-containers/addon-resizer:functional-776422 --alsologtostderr: (14.270117346s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-776422" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (17.19s)
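
For reference, the image-load path this test exercises can be replayed by hand. A minimal sketch, assuming the profile name functional-776422 from the log above; these are the same commands the test ran, with the final listing used to confirm whether the tagged image reached the cluster's container runtime:

    # Sketch only: replay of the tag-and-load sequence checked by ImageTagAndLoadDaemon.
    docker pull gcr.io/google-containers/addon-resizer:1.8.9
    docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-776422
    out/minikube-linux-amd64 -p functional-776422 image load --daemon gcr.io/google-containers/addon-resizer:functional-776422
    out/minikube-linux-amd64 -p functional-776422 image ls   # the functional-776422 tag should appear here; in this run it did not
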

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (175.01s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-056019 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-056019 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.011052237s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-056019 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-056019 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f5b00b09-3d7c-4888-82a4-47b8c33733ca] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f5b00b09-3d7c-4888-82a4-47b8c33733ca] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 14.005808216s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-056019 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0108 20:30:36.429220   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:31:04.115561   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:31:04.517228   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:31:04.522534   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:31:04.532853   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:31:04.553280   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:31:04.593651   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:31:04.674055   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:31:04.834515   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:31:05.155284   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:31:05.796238   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:31:07.076573   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:31:09.638382   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:31:14.758678   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:31:24.999325   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:31:45.480123   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-056019 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.411148249s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-056019 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-056019 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.48
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-056019 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-056019 addons disable ingress-dns --alsologtostderr -v=1: (8.745497626s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-056019 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-056019 addons disable ingress --alsologtostderr -v=1: (7.59973452s)
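For context on the failure above: the probe that times out is the in-VM curl issued through "minikube ssh" at addons_test.go:262, and exit status 28 is most likely curl's operation-timeout exit code surfacing through ssh. Below is a minimal, hypothetical reproduction sketch in Go; the profile name, binary path, and curl command are copied from this log, and it is not the test's own code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Re-run the same in-VM probe the test drives; the command string is
		// taken verbatim from the log above.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ingress-addon-legacy-056019",
			"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output:\n%s\n", out)
		if err != nil {
			// In the failing run, curl inside the VM timed out (exit code 28),
			// which ssh reports as "Process exited with status 28".
			fmt.Println("probe failed:", err)
		}
	}

Running such a probe by hand against a healthy cluster should return the nginx welcome page; in this run it never answered within the test's wait window.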
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-056019 -n ingress-addon-legacy-056019
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-056019 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-056019 logs -n 25: (1.293023883s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-776422                                                         | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-776422                                                         | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-776422                                                         | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-776422 image ls                                                | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	| image          | functional-776422 image save                                              | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-776422                  |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-776422 image rm                                                | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-776422                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-776422 image ls                                                | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	| image          | functional-776422 image load                                              | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-776422 image ls                                                | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	| image          | functional-776422 image save --daemon                                     | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-776422                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-776422                                                         | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-776422                                                         | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-776422 ssh pgrep                                               | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-776422                                                         | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-776422 image build -t                                          | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:27 UTC |
	|                | localhost/my-image:functional-776422                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-776422                                                         | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:26 UTC | 08 Jan 24 20:26 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-776422 image ls                                                | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:27 UTC | 08 Jan 24 20:27 UTC |
	| delete         | -p functional-776422                                                      | functional-776422           | jenkins | v1.32.0 | 08 Jan 24 20:27 UTC | 08 Jan 24 20:27 UTC |
	| start          | -p ingress-addon-legacy-056019                                            | ingress-addon-legacy-056019 | jenkins | v1.32.0 | 08 Jan 24 20:27 UTC | 08 Jan 24 20:29 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-056019                                               | ingress-addon-legacy-056019 | jenkins | v1.32.0 | 08 Jan 24 20:29 UTC | 08 Jan 24 20:29 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-056019                                               | ingress-addon-legacy-056019 | jenkins | v1.32.0 | 08 Jan 24 20:29 UTC | 08 Jan 24 20:29 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-056019                                               | ingress-addon-legacy-056019 | jenkins | v1.32.0 | 08 Jan 24 20:29 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-056019 ip                                            | ingress-addon-legacy-056019 | jenkins | v1.32.0 | 08 Jan 24 20:32 UTC | 08 Jan 24 20:32 UTC |
	| addons         | ingress-addon-legacy-056019                                               | ingress-addon-legacy-056019 | jenkins | v1.32.0 | 08 Jan 24 20:32 UTC | 08 Jan 24 20:32 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-056019                                               | ingress-addon-legacy-056019 | jenkins | v1.32.0 | 08 Jan 24 20:32 UTC | 08 Jan 24 20:32 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:27:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:27:04.382908   27386 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:27:04.383161   27386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:27:04.383170   27386 out.go:309] Setting ErrFile to fd 2...
	I0108 20:27:04.383174   27386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:27:04.383370   27386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 20:27:04.383965   27386 out.go:303] Setting JSON to false
	I0108 20:27:04.384877   27386 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4148,"bootTime":1704741476,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:27:04.384936   27386 start.go:138] virtualization: kvm guest
	I0108 20:27:04.387557   27386 out.go:177] * [ingress-addon-legacy-056019] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:27:04.389127   27386 notify.go:220] Checking for updates...
	I0108 20:27:04.389154   27386 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:27:04.390440   27386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:27:04.391769   27386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:27:04.392982   27386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:27:04.394111   27386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:27:04.395437   27386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:27:04.397148   27386 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:27:04.432790   27386 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 20:27:04.434657   27386 start.go:298] selected driver: kvm2
	I0108 20:27:04.434672   27386 start.go:902] validating driver "kvm2" against <nil>
	I0108 20:27:04.434683   27386 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:27:04.435392   27386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:27:04.435473   27386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 20:27:04.450576   27386 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 20:27:04.450670   27386 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:27:04.450905   27386 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:27:04.450961   27386 cni.go:84] Creating CNI manager for ""
	I0108 20:27:04.450974   27386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 20:27:04.450983   27386 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 20:27:04.450994   27386 start_flags.go:323] config:
	{Name:ingress-addon-legacy-056019 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-056019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:27:04.451128   27386 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:27:04.453569   27386 out.go:177] * Starting control plane node ingress-addon-legacy-056019 in cluster ingress-addon-legacy-056019
	I0108 20:27:04.455348   27386 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 20:27:04.952527   27386 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0108 20:27:04.952577   27386 cache.go:56] Caching tarball of preloaded images
	I0108 20:27:04.952735   27386 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 20:27:04.955071   27386 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0108 20:27:04.956909   27386 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:27:05.076833   27386 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0108 20:27:22.213934   27386 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:27:22.214031   27386 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:27:23.204058   27386 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
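The preload handling above (a download URL carrying a ?checksum=md5:... query, followed by a local "verifying checksum" step) is a plain download-then-hash pattern. A self-contained illustrative sketch, assuming the tarball already sits in the current directory under its original name; this is not minikube's own preload code:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		// Expected digest taken from the ?checksum=md5:... parameter in the log above.
		const expected = "0d02e096853189c5b37812b400898e14"

		f, err := os.Open("preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Stream the file through MD5 and compare against the published checksum.
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != expected {
			log.Fatalf("checksum mismatch: got %s, want %s", got, expected)
		}
		fmt.Println("preload checksum OK")
	}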
	I0108 20:27:23.204413   27386 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/config.json ...
	I0108 20:27:23.204446   27386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/config.json: {Name:mk052627118c31f42cc0ca05c0716c2b9b0cbf5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:27:23.204611   27386 start.go:365] acquiring machines lock for ingress-addon-legacy-056019: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 20:27:23.204644   27386 start.go:369] acquired machines lock for "ingress-addon-legacy-056019" in 16.576µs
	I0108 20:27:23.204659   27386 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-056019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Ku
bernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-056019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:27:23.204733   27386 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 20:27:23.207457   27386 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0108 20:27:23.207588   27386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:27:23.207611   27386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:27:23.221494   27386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37535
	I0108 20:27:23.221938   27386 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:27:23.222497   27386 main.go:141] libmachine: Using API Version  1
	I0108 20:27:23.222518   27386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:27:23.222915   27386 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:27:23.223109   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetMachineName
	I0108 20:27:23.223271   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .DriverName
	I0108 20:27:23.223422   27386 start.go:159] libmachine.API.Create for "ingress-addon-legacy-056019" (driver="kvm2")
	I0108 20:27:23.223459   27386 client.go:168] LocalClient.Create starting
	I0108 20:27:23.223498   27386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem
	I0108 20:27:23.223540   27386 main.go:141] libmachine: Decoding PEM data...
	I0108 20:27:23.223563   27386 main.go:141] libmachine: Parsing certificate...
	I0108 20:27:23.223645   27386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem
	I0108 20:27:23.223672   27386 main.go:141] libmachine: Decoding PEM data...
	I0108 20:27:23.223693   27386 main.go:141] libmachine: Parsing certificate...
	I0108 20:27:23.223726   27386 main.go:141] libmachine: Running pre-create checks...
	I0108 20:27:23.223742   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .PreCreateCheck
	I0108 20:27:23.224135   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetConfigRaw
	I0108 20:27:23.224547   27386 main.go:141] libmachine: Creating machine...
	I0108 20:27:23.224565   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .Create
	I0108 20:27:23.224707   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Creating KVM machine...
	I0108 20:27:23.226235   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found existing default KVM network
	I0108 20:27:23.227184   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:23.226971   27456 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a10}
	I0108 20:27:23.232998   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | trying to create private KVM network mk-ingress-addon-legacy-056019 192.168.39.0/24...
	I0108 20:27:23.308272   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | private KVM network mk-ingress-addon-legacy-056019 192.168.39.0/24 created
	I0108 20:27:23.308309   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:23.308211   27456 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:27:23.308328   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Setting up store path in /home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019 ...
	I0108 20:27:23.308346   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Building disk image from file:///home/jenkins/minikube-integration/17907-10702/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 20:27:23.308410   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Downloading /home/jenkins/minikube-integration/17907-10702/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17907-10702/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 20:27:23.517542   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:23.517388   27456 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/id_rsa...
	I0108 20:27:23.834075   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:23.833915   27456 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/ingress-addon-legacy-056019.rawdisk...
	I0108 20:27:23.834109   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Writing magic tar header
	I0108 20:27:23.834129   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Writing SSH key tar header
	I0108 20:27:23.834142   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:23.834043   27456 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019 ...
	I0108 20:27:23.834161   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019
	I0108 20:27:23.834181   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019 (perms=drwx------)
	I0108 20:27:23.834195   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube/machines
	I0108 20:27:23.834206   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:27:23.834214   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702
	I0108 20:27:23.834254   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 20:27:23.834298   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Checking permissions on dir: /home/jenkins
	I0108 20:27:23.834315   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube/machines (perms=drwxr-xr-x)
	I0108 20:27:23.834334   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube (perms=drwxr-xr-x)
	I0108 20:27:23.834354   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702 (perms=drwxrwxr-x)
	I0108 20:27:23.834372   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Checking permissions on dir: /home
	I0108 20:27:23.834385   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Skipping /home - not owner
	I0108 20:27:23.834398   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 20:27:23.834408   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 20:27:23.834425   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Creating domain...
	I0108 20:27:23.835399   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) define libvirt domain using xml: 
	I0108 20:27:23.835424   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) <domain type='kvm'>
	I0108 20:27:23.835438   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)   <name>ingress-addon-legacy-056019</name>
	I0108 20:27:23.835447   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)   <memory unit='MiB'>4096</memory>
	I0108 20:27:23.835460   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)   <vcpu>2</vcpu>
	I0108 20:27:23.835477   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)   <features>
	I0108 20:27:23.835487   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <acpi/>
	I0108 20:27:23.835499   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <apic/>
	I0108 20:27:23.835508   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <pae/>
	I0108 20:27:23.835522   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     
	I0108 20:27:23.835539   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)   </features>
	I0108 20:27:23.835555   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)   <cpu mode='host-passthrough'>
	I0108 20:27:23.835566   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)   
	I0108 20:27:23.835575   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)   </cpu>
	I0108 20:27:23.835583   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)   <os>
	I0108 20:27:23.835595   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <type>hvm</type>
	I0108 20:27:23.835604   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <boot dev='cdrom'/>
	I0108 20:27:23.835610   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <boot dev='hd'/>
	I0108 20:27:23.835618   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <bootmenu enable='no'/>
	I0108 20:27:23.835648   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)   </os>
	I0108 20:27:23.835671   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)   <devices>
	I0108 20:27:23.835691   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <disk type='file' device='cdrom'>
	I0108 20:27:23.835710   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)       <source file='/home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/boot2docker.iso'/>
	I0108 20:27:23.835726   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)       <target dev='hdc' bus='scsi'/>
	I0108 20:27:23.835744   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)       <readonly/>
	I0108 20:27:23.835759   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     </disk>
	I0108 20:27:23.835773   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <disk type='file' device='disk'>
	I0108 20:27:23.835790   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 20:27:23.835810   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)       <source file='/home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/ingress-addon-legacy-056019.rawdisk'/>
	I0108 20:27:23.835823   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)       <target dev='hda' bus='virtio'/>
	I0108 20:27:23.835835   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     </disk>
	I0108 20:27:23.835848   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <interface type='network'>
	I0108 20:27:23.835865   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)       <source network='mk-ingress-addon-legacy-056019'/>
	I0108 20:27:23.835879   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)       <model type='virtio'/>
	I0108 20:27:23.835894   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     </interface>
	I0108 20:27:23.835919   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <interface type='network'>
	I0108 20:27:23.835933   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)       <source network='default'/>
	I0108 20:27:23.835948   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)       <model type='virtio'/>
	I0108 20:27:23.835965   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     </interface>
	I0108 20:27:23.835980   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <serial type='pty'>
	I0108 20:27:23.835993   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)       <target port='0'/>
	I0108 20:27:23.836008   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     </serial>
	I0108 20:27:23.836021   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <console type='pty'>
	I0108 20:27:23.836049   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)       <target type='serial' port='0'/>
	I0108 20:27:23.836075   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     </console>
	I0108 20:27:23.836109   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     <rng model='virtio'>
	I0108 20:27:23.836128   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)       <backend model='random'>/dev/random</backend>
	I0108 20:27:23.836141   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     </rng>
	I0108 20:27:23.836153   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     
	I0108 20:27:23.836167   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)     
	I0108 20:27:23.836178   27386 main.go:141] libmachine: (ingress-addon-legacy-056019)   </devices>
	I0108 20:27:23.836192   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) </domain>
	I0108 20:27:23.836209   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) 
	I0108 20:27:23.841308   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:ac:97:03 in network default
	I0108 20:27:23.841934   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Ensuring networks are active...
	I0108 20:27:23.841968   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:23.842697   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Ensuring network default is active
	I0108 20:27:23.843201   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Ensuring network mk-ingress-addon-legacy-056019 is active
	I0108 20:27:23.843804   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Getting domain xml...
	I0108 20:27:23.844487   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Creating domain...
	I0108 20:27:25.097798   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Waiting to get IP...
	I0108 20:27:25.098739   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:25.099149   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:25.099175   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:25.099130   27456 retry.go:31] will retry after 255.535102ms: waiting for machine to come up
	I0108 20:27:25.357077   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:25.357547   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:25.357613   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:25.357531   27456 retry.go:31] will retry after 248.98523ms: waiting for machine to come up
	I0108 20:27:25.608181   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:25.608617   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:25.608649   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:25.608583   27456 retry.go:31] will retry after 447.498128ms: waiting for machine to come up
	I0108 20:27:26.057416   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:26.057869   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:26.057900   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:26.057809   27456 retry.go:31] will retry after 517.333264ms: waiting for machine to come up
	I0108 20:27:26.576350   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:26.576725   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:26.576745   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:26.576697   27456 retry.go:31] will retry after 566.533485ms: waiting for machine to come up
	I0108 20:27:27.144548   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:27.145149   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:27.145189   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:27.145084   27456 retry.go:31] will retry after 574.752439ms: waiting for machine to come up
	I0108 20:27:27.721986   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:27.722292   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:27.722317   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:27.722232   27456 retry.go:31] will retry after 932.538467ms: waiting for machine to come up
	I0108 20:27:28.657109   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:28.657719   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:28.657773   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:28.657698   27456 retry.go:31] will retry after 990.969219ms: waiting for machine to come up
	I0108 20:27:29.650268   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:29.650717   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:29.650799   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:29.650641   27456 retry.go:31] will retry after 1.758165945s: waiting for machine to come up
	I0108 20:27:31.411356   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:31.411784   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:31.411816   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:31.411694   27456 retry.go:31] will retry after 1.418423855s: waiting for machine to come up
	I0108 20:27:32.831467   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:32.831982   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:32.832008   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:32.831921   27456 retry.go:31] will retry after 2.044986026s: waiting for machine to come up
	I0108 20:27:34.878936   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:34.879395   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:34.879425   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:34.879334   27456 retry.go:31] will retry after 3.195475253s: waiting for machine to come up
	I0108 20:27:38.078631   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:38.078992   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:38.079044   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:38.078945   27456 retry.go:31] will retry after 3.161300964s: waiting for machine to come up
	I0108 20:27:41.242421   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:41.243065   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find current IP address of domain ingress-addon-legacy-056019 in network mk-ingress-addon-legacy-056019
	I0108 20:27:41.243097   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | I0108 20:27:41.242970   27456 retry.go:31] will retry after 3.700193084s: waiting for machine to come up
	I0108 20:27:44.945477   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:44.945981   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Found IP for machine: 192.168.39.48
	I0108 20:27:44.946002   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Reserving static IP address...
	I0108 20:27:44.946020   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has current primary IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:44.946348   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-056019", mac: "52:54:00:a9:0d:a8", ip: "192.168.39.48"} in network mk-ingress-addon-legacy-056019
	I0108 20:27:45.025063   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Getting to WaitForSSH function...
	I0108 20:27:45.025099   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Reserved static IP address: 192.168.39.48
	I0108 20:27:45.025114   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Waiting for SSH to be available...
	I0108 20:27:45.027793   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:45.028020   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019
	I0108 20:27:45.028043   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | unable to find defined IP address of network mk-ingress-addon-legacy-056019 interface with MAC address 52:54:00:a9:0d:a8
	I0108 20:27:45.028201   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Using SSH client type: external
	I0108 20:27:45.028235   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Using SSH private key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/id_rsa (-rw-------)
	I0108 20:27:45.028287   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 20:27:45.028305   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | About to run SSH command:
	I0108 20:27:45.028316   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | exit 0
	I0108 20:27:45.032019   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | SSH cmd err, output: exit status 255: 
	I0108 20:27:45.032058   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0108 20:27:45.032073   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | command : exit 0
	I0108 20:27:45.032087   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | err     : exit status 255
	I0108 20:27:45.032142   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | output  : 
	I0108 20:27:48.034136   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Getting to WaitForSSH function...
	I0108 20:27:48.037019   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.037364   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:48.037395   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.037508   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Using SSH client type: external
	I0108 20:27:48.037540   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Using SSH private key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/id_rsa (-rw-------)
	I0108 20:27:48.037573   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 20:27:48.037588   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | About to run SSH command:
	I0108 20:27:48.037598   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | exit 0
	I0108 20:27:48.127956   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | SSH cmd err, output: <nil>: 
	I0108 20:27:48.128221   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) KVM machine creation complete!
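The "waiting for machine to come up" lines above show the driver polling for the VM's DHCP lease with steadily growing, jittered delays before the SSH probe succeeds. A generic sketch of that retry-with-backoff idea, using a made-up probe function; it is not minikube's retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor retries probe with a roughly doubling, jittered delay until it
	// succeeds or the overall deadline passes.
	func waitFor(probe func() error, deadline time.Duration) error {
		delay := 250 * time.Millisecond
		stop := time.Now().Add(deadline)
		for {
			if err := probe(); err == nil {
				return nil
			}
			if time.Now().After(stop) {
				return errors.New("timed out waiting for machine")
			}
			jitter := time.Duration(rand.Int63n(int64(delay)))
			time.Sleep(delay + jitter)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		err := waitFor(func() error {
			attempts++
			if attempts < 5 { // stand-in for "unable to find current IP address"
				return errors.New("no IP yet")
			}
			return nil
		}, 4*time.Minute)
		fmt.Println("attempts:", attempts, "err:", err)
	}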
	I0108 20:27:48.128510   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetConfigRaw
	I0108 20:27:48.128982   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .DriverName
	I0108 20:27:48.129125   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .DriverName
	I0108 20:27:48.129345   27386 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 20:27:48.129362   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetState
	I0108 20:27:48.130673   27386 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 20:27:48.130690   27386 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 20:27:48.130699   27386 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 20:27:48.130709   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHHostname
	I0108 20:27:48.133101   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.133460   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:48.133495   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.133574   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHPort
	I0108 20:27:48.133709   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:48.133808   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:48.133957   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHUsername
	I0108 20:27:48.134061   27386 main.go:141] libmachine: Using SSH client type: native
	I0108 20:27:48.134392   27386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0108 20:27:48.134404   27386 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 20:27:48.247770   27386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:27:48.247802   27386 main.go:141] libmachine: Detecting the provisioner...
	I0108 20:27:48.247817   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHHostname
	I0108 20:27:48.250644   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.251032   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:48.251080   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.251272   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHPort
	I0108 20:27:48.251534   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:48.251691   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:48.251832   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHUsername
	I0108 20:27:48.252063   27386 main.go:141] libmachine: Using SSH client type: native
	I0108 20:27:48.252483   27386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0108 20:27:48.252499   27386 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 20:27:48.369513   27386 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 20:27:48.369588   27386 main.go:141] libmachine: found compatible host: buildroot
	I0108 20:27:48.369599   27386 main.go:141] libmachine: Provisioning with buildroot...
	I0108 20:27:48.369620   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetMachineName
	I0108 20:27:48.369955   27386 buildroot.go:166] provisioning hostname "ingress-addon-legacy-056019"
	I0108 20:27:48.369989   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetMachineName
	I0108 20:27:48.370271   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHHostname
	I0108 20:27:48.373785   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.374166   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:48.374200   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.374372   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHPort
	I0108 20:27:48.374609   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:48.374812   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:48.374997   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHUsername
	I0108 20:27:48.375275   27386 main.go:141] libmachine: Using SSH client type: native
	I0108 20:27:48.375603   27386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0108 20:27:48.375618   27386 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-056019 && echo "ingress-addon-legacy-056019" | sudo tee /etc/hostname
	I0108 20:27:48.505331   27386 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-056019
	
	I0108 20:27:48.505379   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHHostname
	I0108 20:27:48.507888   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.508193   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:48.508215   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.508421   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHPort
	I0108 20:27:48.508668   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:48.508818   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:48.508940   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHUsername
	I0108 20:27:48.509101   27386 main.go:141] libmachine: Using SSH client type: native
	I0108 20:27:48.509405   27386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0108 20:27:48.509423   27386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-056019' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-056019/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-056019' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:27:48.632163   27386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:27:48.632191   27386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17907-10702/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-10702/.minikube}
	I0108 20:27:48.632233   27386 buildroot.go:174] setting up certificates
	I0108 20:27:48.632242   27386 provision.go:83] configureAuth start
	I0108 20:27:48.632253   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetMachineName
	I0108 20:27:48.632512   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetIP
	I0108 20:27:48.634793   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.635175   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:48.635227   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.635317   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHHostname
	I0108 20:27:48.637270   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.637580   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:48.637609   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.637767   27386 provision.go:138] copyHostCerts
	I0108 20:27:48.637801   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 20:27:48.637845   27386 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem, removing ...
	I0108 20:27:48.637859   27386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 20:27:48.637963   27386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem (1082 bytes)
	I0108 20:27:48.638106   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 20:27:48.638135   27386 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem, removing ...
	I0108 20:27:48.638145   27386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 20:27:48.638178   27386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem (1123 bytes)
	I0108 20:27:48.638248   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 20:27:48.638271   27386 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem, removing ...
	I0108 20:27:48.638281   27386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 20:27:48.638314   27386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem (1675 bytes)
	I0108 20:27:48.638385   27386 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-056019 san=[192.168.39.48 192.168.39.48 localhost 127.0.0.1 minikube ingress-addon-legacy-056019]
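provision.go then signs a per-machine server certificate whose SAN list covers the VM IP, loopback, the machine name and "minikube". Below is a hedged standard-library sketch of issuing such a certificate from a CA; the CA here is a throwaway self-signed one and the field values are illustrative, so this demonstrates the SAN mechanics only, not minikube's exact certificate parameters.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for the ca.pem/ca-key.pem pair under
    	// .minikube/certs. Errors are elided for brevity in this sketch.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "exampleCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		BasicConstraintsValid: true,
    		KeyUsage:              x509.KeyUsageCertSign,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert whose SANs cover the VM IP, localhost and the machine
    	// name, matching the san=[...] list printed by provision.go above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"example"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.39.48"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-056019"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }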
	I0108 20:27:48.890662   27386 provision.go:172] copyRemoteCerts
	I0108 20:27:48.890715   27386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:27:48.890737   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHHostname
	I0108 20:27:48.893316   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.893623   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:48.893651   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:48.893778   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHPort
	I0108 20:27:48.893991   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:48.894152   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHUsername
	I0108 20:27:48.894294   27386 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/id_rsa Username:docker}
	I0108 20:27:48.982464   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 20:27:48.982534   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:27:49.005837   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 20:27:49.005923   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 20:27:49.028188   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 20:27:49.028251   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 20:27:49.052118   27386 provision.go:86] duration metric: configureAuth took 419.862541ms
	I0108 20:27:49.052148   27386 buildroot.go:189] setting minikube options for container-runtime
	I0108 20:27:49.052374   27386 config.go:182] Loaded profile config "ingress-addon-legacy-056019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 20:27:49.052452   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHHostname
	I0108 20:27:49.055405   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.055802   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:49.055837   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.056014   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHPort
	I0108 20:27:49.056277   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:49.056470   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:49.056625   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHUsername
	I0108 20:27:49.056806   27386 main.go:141] libmachine: Using SSH client type: native
	I0108 20:27:49.057219   27386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0108 20:27:49.057238   27386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:27:49.375236   27386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:27:49.375267   27386 main.go:141] libmachine: Checking connection to Docker...
	I0108 20:27:49.375277   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetURL
	I0108 20:27:49.376609   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Using libvirt version 6000000
	I0108 20:27:49.378793   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.379215   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:49.379248   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.379424   27386 main.go:141] libmachine: Docker is up and running!
	I0108 20:27:49.379437   27386 main.go:141] libmachine: Reticulating splines...
	I0108 20:27:49.379443   27386 client.go:171] LocalClient.Create took 26.15597397s
	I0108 20:27:49.379472   27386 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-056019" took 26.156043005s
	I0108 20:27:49.379482   27386 start.go:300] post-start starting for "ingress-addon-legacy-056019" (driver="kvm2")
	I0108 20:27:49.379492   27386 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:27:49.379507   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .DriverName
	I0108 20:27:49.379743   27386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:27:49.379781   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHHostname
	I0108 20:27:49.381878   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.382298   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:49.382332   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.382491   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHPort
	I0108 20:27:49.382684   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:49.382858   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHUsername
	I0108 20:27:49.382980   27386 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/id_rsa Username:docker}
	I0108 20:27:49.469751   27386 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:27:49.474534   27386 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 20:27:49.474559   27386 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/addons for local assets ...
	I0108 20:27:49.474622   27386 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/files for local assets ...
	I0108 20:27:49.474716   27386 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> 178962.pem in /etc/ssl/certs
	I0108 20:27:49.474727   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> /etc/ssl/certs/178962.pem
	I0108 20:27:49.474825   27386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:27:49.483614   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /etc/ssl/certs/178962.pem (1708 bytes)
	I0108 20:27:49.510416   27386 start.go:303] post-start completed in 130.921423ms
	I0108 20:27:49.510464   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetConfigRaw
	I0108 20:27:49.511034   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetIP
	I0108 20:27:49.513396   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.513744   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:49.513768   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.514049   27386 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/config.json ...
	I0108 20:27:49.514215   27386 start.go:128] duration metric: createHost completed in 26.309473361s
	I0108 20:27:49.514237   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHHostname
	I0108 20:27:49.516237   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.516524   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:49.516556   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.516632   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHPort
	I0108 20:27:49.516820   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:49.516987   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:49.517110   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHUsername
	I0108 20:27:49.517250   27386 main.go:141] libmachine: Using SSH client type: native
	I0108 20:27:49.517581   27386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.48 22 <nil> <nil>}
	I0108 20:27:49.517594   27386 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 20:27:49.632716   27386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704745669.604844781
	
	I0108 20:27:49.632739   27386 fix.go:206] guest clock: 1704745669.604844781
	I0108 20:27:49.632752   27386 fix.go:219] Guest: 2024-01-08 20:27:49.604844781 +0000 UTC Remote: 2024-01-08 20:27:49.514225551 +0000 UTC m=+45.182650227 (delta=90.61923ms)
	I0108 20:27:49.632806   27386 fix.go:190] guest clock delta is within tolerance: 90.61923ms
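fix.go compares the guest's "date +%s.%N" output against the host wall clock and only resynchronizes when the drift is outside tolerance; here the delta was roughly 90 ms. A small sketch of that comparison follows, using the two timestamps from the log; the one-second tolerance is an assumed value for illustration, not necessarily the bound minikube uses.

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    // clockDelta parses the guest's "date +%s.%N" output (seconds.nanoseconds)
    // and returns how far it is from the supplied host time. Nanosecond
    // precision is approximate because of the float64 round-trip.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	host := time.Unix(0, int64(1704745669514225551)) // "Remote" timestamp from the log
    	d, err := clockDelta("1704745669.604844781", host)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("guest clock delta: %v\n", d)
    	// Only adjust the guest clock when the drift is outside tolerance.
    	if math.Abs(float64(d)) > float64(time.Second) {
    		fmt.Println("delta exceeds tolerance, would resync guest clock")
    	}
    }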
	I0108 20:27:49.632813   27386 start.go:83] releasing machines lock for "ingress-addon-legacy-056019", held for 26.428162133s
	I0108 20:27:49.632846   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .DriverName
	I0108 20:27:49.633090   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetIP
	I0108 20:27:49.635253   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.635539   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:49.635570   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.635688   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .DriverName
	I0108 20:27:49.636181   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .DriverName
	I0108 20:27:49.636366   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .DriverName
	I0108 20:27:49.636443   27386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:27:49.636482   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHHostname
	I0108 20:27:49.636555   27386 ssh_runner.go:195] Run: cat /version.json
	I0108 20:27:49.636574   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHHostname
	I0108 20:27:49.638997   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.639320   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:49.639349   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.639375   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.639490   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHPort
	I0108 20:27:49.639655   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:49.639733   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:49.639761   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:49.639892   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHPort
	I0108 20:27:49.639892   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHUsername
	I0108 20:27:49.640041   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:27:49.640041   27386 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/id_rsa Username:docker}
	I0108 20:27:49.640214   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHUsername
	I0108 20:27:49.640354   27386 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/id_rsa Username:docker}
	I0108 20:27:49.721613   27386 ssh_runner.go:195] Run: systemctl --version
	I0108 20:27:49.746361   27386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:27:49.911694   27386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 20:27:49.917820   27386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 20:27:49.917891   27386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:27:49.933456   27386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 20:27:49.933483   27386 start.go:475] detecting cgroup driver to use...
	I0108 20:27:49.933542   27386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:27:49.951649   27386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:27:49.965950   27386 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:27:49.966014   27386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:27:49.979577   27386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:27:49.992753   27386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:27:50.101102   27386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:27:50.224437   27386 docker.go:233] disabling docker service ...
	I0108 20:27:50.224500   27386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:27:50.239425   27386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:27:50.253162   27386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:27:50.357677   27386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:27:50.458680   27386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:27:50.472202   27386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:27:50.490279   27386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0108 20:27:50.490372   27386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:27:50.499810   27386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:27:50.499869   27386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:27:50.509644   27386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:27:50.518874   27386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
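The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod" under it. Below is a hedged Go sketch of the same line-oriented rewrite applied to an in-memory config instead of over SSH; the regexes mirror the sed expressions in the log and the sample input is invented for illustration.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rewriteCrioConf applies the same substitutions the log performs with sed:
    // pin pause_image, force cgroup_manager to cgroupfs, and re-add conmon_cgroup.
    func rewriteCrioConf(conf string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
    	return conf
    }

    func main() {
    	// Invented sample drop-in, for illustration only.
    	in := "[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\npause_image = \"k8s.gcr.io/pause:3.5\"\n"
    	fmt.Print(rewriteCrioConf(in))
    }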
	I0108 20:27:50.528344   27386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:27:50.537795   27386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:27:50.546506   27386 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 20:27:50.546564   27386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 20:27:50.559924   27386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:27:50.569481   27386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:27:50.688916   27386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 20:27:50.862979   27386 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:27:50.863049   27386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:27:50.868774   27386 start.go:543] Will wait 60s for crictl version
	I0108 20:27:50.868831   27386 ssh_runner.go:195] Run: which crictl
	I0108 20:27:50.872986   27386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:27:50.919075   27386 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 20:27:50.919155   27386 ssh_runner.go:195] Run: crio --version
	I0108 20:27:50.970307   27386 ssh_runner.go:195] Run: crio --version
	I0108 20:27:51.018552   27386 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0108 20:27:51.020399   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetIP
	I0108 20:27:51.023068   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:51.023429   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:27:51.023461   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:27:51.023676   27386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 20:27:51.028047   27386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:27:51.041447   27386 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0108 20:27:51.041497   27386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:27:51.078403   27386 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 20:27:51.078478   27386 ssh_runner.go:195] Run: which lz4
	I0108 20:27:51.082747   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 20:27:51.082854   27386 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 20:27:51.087371   27386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 20:27:51.087409   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0108 20:27:53.051970   27386 crio.go:444] Took 1.969142 seconds to copy over tarball
	I0108 20:27:53.052053   27386 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 20:27:56.335324   27386 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.283243529s)
	I0108 20:27:56.335358   27386 crio.go:451] Took 3.283354 seconds to extract the tarball
	I0108 20:27:56.335370   27386 ssh_runner.go:146] rm: /preloaded.tar.lz4
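Since no preloaded images were found in the runtime, the ~495 MB preload tarball is copied into the guest and unpacked with "tar -I lz4 -C /var", which is what the two duration lines above are timing. A minimal local sketch of that extraction step via os/exec follows; the paths are placeholders and it assumes the tar and lz4 binaries are installed.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // extractPreload unpacks an lz4-compressed image tarball under destDir,
    // the same "tar -I lz4 -C /var -xf /preloaded.tar.lz4" step timed in the log.
    func extractPreload(tarball, destDir string) error {
    	start := time.Now()
    	cmd := exec.Command("tar", "-I", "lz4", "-C", destDir, "-xf", tarball)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
    	}
    	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
    	return nil
    }

    func main() {
    	// Placeholder paths for illustration only.
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		fmt.Println(err)
    	}
    }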
	I0108 20:27:56.382069   27386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:27:56.434447   27386 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0108 20:27:56.434476   27386 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 20:27:56.434527   27386 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:27:56.434553   27386 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:27:56.434571   27386 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 20:27:56.434590   27386 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0108 20:27:56.434659   27386 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:27:56.434683   27386 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0108 20:27:56.434555   27386 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:27:56.434574   27386 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:27:56.435715   27386 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:27:56.435748   27386 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:27:56.435715   27386 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 20:27:56.435768   27386 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0108 20:27:56.435715   27386 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:27:56.435719   27386 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:27:56.435719   27386 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0108 20:27:56.436052   27386 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:27:56.613132   27386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:27:56.648324   27386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:27:56.658573   27386 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0108 20:27:56.658607   27386 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:27:56.658656   27386 ssh_runner.go:195] Run: which crictl
	I0108 20:27:56.664543   27386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0108 20:27:56.666078   27386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:27:56.667577   27386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0108 20:27:56.669531   27386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0108 20:27:56.671587   27386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:27:56.737052   27386 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0108 20:27:56.737099   27386 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:27:56.737150   27386 ssh_runner.go:195] Run: which crictl
	I0108 20:27:56.737160   27386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0108 20:27:56.826684   27386 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0108 20:27:56.826746   27386 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0108 20:27:56.826764   27386 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:27:56.826791   27386 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0108 20:27:56.826809   27386 ssh_runner.go:195] Run: which crictl
	I0108 20:27:56.826747   27386 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0108 20:27:56.826847   27386 ssh_runner.go:195] Run: which crictl
	I0108 20:27:56.826810   27386 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0108 20:27:56.826886   27386 ssh_runner.go:195] Run: which crictl
	I0108 20:27:56.831951   27386 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0108 20:27:56.831993   27386 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0108 20:27:56.832012   27386 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0108 20:27:56.832038   27386 ssh_runner.go:195] Run: which crictl
	I0108 20:27:56.832049   27386 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:27:56.832051   27386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0108 20:27:56.832086   27386 ssh_runner.go:195] Run: which crictl
	I0108 20:27:56.859272   27386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0108 20:27:56.859350   27386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0108 20:27:56.859412   27386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0108 20:27:56.859478   27386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0108 20:27:56.859527   27386 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0108 20:27:56.859543   27386 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0108 20:27:56.934527   27386 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0108 20:27:56.992977   27386 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0108 20:27:57.013180   27386 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0108 20:27:57.013254   27386 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0108 20:27:57.013305   27386 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0108 20:27:57.013318   27386 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0108 20:27:57.430740   27386 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:27:57.573427   27386 cache_images.go:92] LoadImages completed in 1.138935507s
	W0108 20:27:57.573494   27386 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	I0108 20:27:57.573558   27386 ssh_runner.go:195] Run: crio config
	I0108 20:27:57.641050   27386 cni.go:84] Creating CNI manager for ""
	I0108 20:27:57.641072   27386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 20:27:57.641092   27386 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:27:57.641109   27386 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.48 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-056019 NodeName:ingress-addon-legacy-056019 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0108 20:27:57.641241   27386 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-056019"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:27:57.641317   27386 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-056019 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-056019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 20:27:57.641369   27386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0108 20:27:57.652373   27386 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:27:57.652439   27386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:27:57.662946   27386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0108 20:27:57.681226   27386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0108 20:27:57.698767   27386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
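The kubelet drop-in, the kubelet unit and the generated kubeadm.yaml are never files on the Jenkins host: they are rendered in memory and streamed to their destination paths on the guest ("scp memory --> ..."). One way to stream an in-memory blob to a root-owned remote path is to pipe it through ssh into sudo tee, sketched below; this illustrates the idea only and is not minikube's actual transfer mechanism.

    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    )

    // pushFile streams data to remotePath on the guest by piping it into
    // "sudo tee", so no temporary file is needed on either side.
    func pushFile(user, host, keyPath, remotePath string, data []byte) error {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-i", keyPath,
    		fmt.Sprintf("%s@%s", user, host),
    		fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
    	cmd.Stdin = bytes.NewReader(data)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("push %s: %v: %s", remotePath, err, out)
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Unit]\nWants=crio.service\n") // illustrative snippet only
    	if err := pushFile("docker", "192.168.39.48", "/path/to/id_rsa", "/lib/systemd/system/kubelet.service", unit); err != nil {
    		fmt.Println(err)
    	}
    }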
	I0108 20:27:57.715578   27386 ssh_runner.go:195] Run: grep 192.168.39.48	control-plane.minikube.internal$ /etc/hosts
	I0108 20:27:57.719986   27386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
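Both host.minikube.internal (at 20:27:51 above) and control-plane.minikube.internal are pinned with the same idempotent trick: drop any existing line for the name, append a fresh "IP<tab>name" entry, and copy the result back over /etc/hosts. The sketch below does that filter-and-append step as plain string processing instead of the grep -v / echo pipeline; names and addresses are taken from the log.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHostsEntry removes any line ending in "\t<name>" and appends a
    // fresh "ip\tname" line, mirroring the pipeline run over SSH above.
    func upsertHostsEntry(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop the stale entry for this name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
    	fmt.Print(upsertHostsEntry(hosts, "192.168.39.48", "control-plane.minikube.internal"))
    }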
	I0108 20:27:57.734221   27386 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019 for IP: 192.168.39.48
	I0108 20:27:57.734251   27386 certs.go:190] acquiring lock for shared ca certs: {Name:mke01aa9d73e320a9a3907677cf29c75f0fa86d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:27:57.734389   27386 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key
	I0108 20:27:57.734432   27386 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key
	I0108 20:27:57.734484   27386 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.key
	I0108 20:27:57.734496   27386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt with IP's: []
	I0108 20:27:57.837155   27386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt ...
	I0108 20:27:57.837182   27386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: {Name:mk070a8e00e60178941dcb5cc2ae7509d3e00df8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:27:57.837386   27386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.key ...
	I0108 20:27:57.837404   27386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.key: {Name:mk70d39105b56de3059e02db54cc8c55c0d72545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:27:57.837504   27386 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.key.1e055435
	I0108 20:27:57.837521   27386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.crt.1e055435 with IP's: [192.168.39.48 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 20:27:58.219944   27386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.crt.1e055435 ...
	I0108 20:27:58.219976   27386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.crt.1e055435: {Name:mk0d09419366f9dc94d4c0ac61211f8dd8241ce1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:27:58.220179   27386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.key.1e055435 ...
	I0108 20:27:58.220196   27386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.key.1e055435: {Name:mk157eb4e1d02022d1c194188e1cc280362387e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:27:58.220290   27386 certs.go:337] copying /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.crt.1e055435 -> /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.crt
	I0108 20:27:58.220364   27386 certs.go:341] copying /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.key.1e055435 -> /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.key
	I0108 20:27:58.220415   27386 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/proxy-client.key
	I0108 20:27:58.220425   27386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/proxy-client.crt with IP's: []
	I0108 20:27:58.459088   27386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/proxy-client.crt ...
	I0108 20:27:58.459117   27386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/proxy-client.crt: {Name:mk95b83695a2aaaf243021ea1fe39cecb31ede4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:27:58.459285   27386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/proxy-client.key ...
	I0108 20:27:58.459303   27386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/proxy-client.key: {Name:mkaf690a77b128049245a845ae55856064cfe58f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:27:58.459492   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 20:27:58.459533   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 20:27:58.459557   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 20:27:58.459570   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 20:27:58.459586   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 20:27:58.459602   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 20:27:58.459615   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 20:27:58.459627   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 20:27:58.459676   27386 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem (1338 bytes)
	W0108 20:27:58.459711   27386 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896_empty.pem, impossibly tiny 0 bytes
	I0108 20:27:58.459739   27386 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:27:58.459768   27386 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem (1082 bytes)
	I0108 20:27:58.459794   27386 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:27:58.459833   27386 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem (1675 bytes)
	I0108 20:27:58.459875   27386 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem (1708 bytes)
	I0108 20:27:58.459903   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:27:58.459919   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem -> /usr/share/ca-certificates/17896.pem
	I0108 20:27:58.459932   27386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> /usr/share/ca-certificates/178962.pem
	I0108 20:27:58.460583   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:27:58.485489   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 20:27:58.509442   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:27:58.532475   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 20:27:58.556367   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:27:58.580321   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:27:58.603479   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:27:58.626006   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:27:58.649478   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:27:58.672770   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem --> /usr/share/ca-certificates/17896.pem (1338 bytes)
	I0108 20:27:58.695407   27386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /usr/share/ca-certificates/178962.pem (1708 bytes)
	I0108 20:27:58.716961   27386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:27:58.732519   27386 ssh_runner.go:195] Run: openssl version
	I0108 20:27:58.738093   27386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178962.pem && ln -fs /usr/share/ca-certificates/178962.pem /etc/ssl/certs/178962.pem"
	I0108 20:27:58.749078   27386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178962.pem
	I0108 20:27:58.753841   27386 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 20:27:58.753901   27386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178962.pem
	I0108 20:27:58.759662   27386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178962.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:27:58.770583   27386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:27:58.781576   27386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:27:58.786471   27386 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:27:58.786539   27386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:27:58.792605   27386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:27:58.803401   27386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17896.pem && ln -fs /usr/share/ca-certificates/17896.pem /etc/ssl/certs/17896.pem"
	I0108 20:27:58.813970   27386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17896.pem
	I0108 20:27:58.818692   27386 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 20:27:58.818750   27386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17896.pem
	I0108 20:27:58.824476   27386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17896.pem /etc/ssl/certs/51391683.0"
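The three openssl x509 -hash runs above compute the subject-hash filenames used by the symlinks that follow them (3ec20f2e.0, b5213941.0, 51391683.0); those hash-named links in /etc/ssl/certs are what let OpenSSL's CApath lookup find the minikube CA. A hedged way to confirm the trust chain on the guest, reusing the cert paths from this log:

    # If the hash symlink for minikubeCA.pem is correct, CApath verification of the apiserver cert succeeds.
    minikube ssh -p ingress-addon-legacy-056019 "sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt"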
	I0108 20:27:58.835308   27386 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:27:58.840841   27386 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:27:58.840903   27386 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-056019 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-056019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:27:58.840999   27386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 20:27:58.841044   27386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:27:58.884004   27386 cri.go:89] found id: ""
	I0108 20:27:58.884087   27386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:27:58.894352   27386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:27:58.904045   27386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:27:58.913640   27386 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:27:58.913685   27386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
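The init command above skips a fixed list of preflight checks (stale manifest/directory checks, port 10250, swap, CPU count) so kubeadm can run on the freshly provisioned guest. If init fails, one hedged way to reproduce just that stage is kubeadm's phase subcommand against the same generated config:

    # Re-run only the preflight checks (kubeadm >= 1.13 exposes init phases).
    sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml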
	I0108 20:27:58.969082   27386 kubeadm.go:322] W0108 20:27:58.952585     955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0108 20:27:59.109600   27386 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:28:01.680550   27386 kubeadm.go:322] W0108 20:28:01.666434     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 20:28:01.682530   27386 kubeadm.go:322] W0108 20:28:01.668441     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0108 20:28:11.729333   27386 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0108 20:28:11.729408   27386 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 20:28:11.729510   27386 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:28:11.729627   27386 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:28:11.729747   27386 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 20:28:11.729873   27386 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:28:11.730007   27386 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:28:11.730076   27386 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 20:28:11.730150   27386 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:28:11.733032   27386 out.go:204]   - Generating certificates and keys ...
	I0108 20:28:11.733122   27386 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 20:28:11.733206   27386 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 20:28:11.733303   27386 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:28:11.733373   27386 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:28:11.733455   27386 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 20:28:11.733525   27386 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 20:28:11.733606   27386 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 20:28:11.733751   27386 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-056019 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I0108 20:28:11.733848   27386 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 20:28:11.733992   27386 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-056019 localhost] and IPs [192.168.39.48 127.0.0.1 ::1]
	I0108 20:28:11.734096   27386 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:28:11.734166   27386 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:28:11.734219   27386 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 20:28:11.734303   27386 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:28:11.734380   27386 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:28:11.734453   27386 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:28:11.734537   27386 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:28:11.734612   27386 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:28:11.734720   27386 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:28:11.736353   27386 out.go:204]   - Booting up control plane ...
	I0108 20:28:11.736454   27386 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:28:11.736542   27386 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:28:11.736648   27386 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:28:11.736749   27386 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:28:11.736908   27386 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:28:11.736992   27386 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503857 seconds
	I0108 20:28:11.737150   27386 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:28:11.737333   27386 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:28:11.737414   27386 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:28:11.737550   27386 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-056019 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 20:28:11.737645   27386 kubeadm.go:322] [bootstrap-token] Using token: haa67v.7ltj3jtpv7lnfjw7
	I0108 20:28:11.738999   27386 out.go:204]   - Configuring RBAC rules ...
	I0108 20:28:11.739127   27386 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:28:11.739226   27386 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:28:11.739378   27386 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:28:11.739519   27386 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 20:28:11.739696   27386 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:28:11.739776   27386 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:28:11.739879   27386 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:28:11.739920   27386 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 20:28:11.739958   27386 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 20:28:11.739964   27386 kubeadm.go:322] 
	I0108 20:28:11.740010   27386 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 20:28:11.740016   27386 kubeadm.go:322] 
	I0108 20:28:11.740119   27386 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 20:28:11.740133   27386 kubeadm.go:322] 
	I0108 20:28:11.740168   27386 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 20:28:11.740240   27386 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:28:11.740288   27386 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:28:11.740294   27386 kubeadm.go:322] 
	I0108 20:28:11.740334   27386 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 20:28:11.740421   27386 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:28:11.740479   27386 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:28:11.740485   27386 kubeadm.go:322] 
	I0108 20:28:11.740554   27386 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:28:11.740626   27386 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 20:28:11.740632   27386 kubeadm.go:322] 
	I0108 20:28:11.740720   27386 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token haa67v.7ltj3jtpv7lnfjw7 \
	I0108 20:28:11.740859   27386 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 \
	I0108 20:28:11.740885   27386 kubeadm.go:322]     --control-plane 
	I0108 20:28:11.740889   27386 kubeadm.go:322] 
	I0108 20:28:11.740955   27386 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:28:11.740966   27386 kubeadm.go:322] 
	I0108 20:28:11.741032   27386 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token haa67v.7ltj3jtpv7lnfjw7 \
	I0108 20:28:11.741149   27386 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 
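The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's DER-encoded public key. A sketch of recomputing it on the control-plane node, following the standard kubeadm recipe but using the CA path seen earlier in this log:

    # Recompute the discovery-token-ca-cert-hash value from the cluster CA public key.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex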
	I0108 20:28:11.741165   27386 cni.go:84] Creating CNI manager for ""
	I0108 20:28:11.741174   27386 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 20:28:11.742923   27386 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 20:28:11.744390   27386 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 20:28:11.766256   27386 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
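The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config recommended above for the kvm2 + crio combination; its exact contents are not captured in this log, but a conflist of this kind typically chains the bridge and portmap plugins with host-local IPAM over the 10.244.0.0/16 pod CIDR configured earlier. To see the file minikube actually wrote:

    # Inspect the generated bridge CNI config on the guest (contents not shown in this log).
    minikube ssh -p ingress-addon-legacy-056019 "sudo cat /etc/cni/net.d/1-k8s.conflist"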
	I0108 20:28:11.784558   27386 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 20:28:11.784656   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:11.784656   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=ingress-addon-legacy-056019 minikube.k8s.io/updated_at=2024_01_08T20_28_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:11.973953   27386 ops.go:34] apiserver oom_adj: -16
	I0108 20:28:11.974230   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:12.475205   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:12.974717   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:13.475142   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:13.974581   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:14.474568   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:14.974254   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:15.474699   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:15.974830   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:16.474612   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:16.975279   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:17.475323   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:17.975096   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:18.474331   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:18.975152   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:19.474292   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:19.974495   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:20.474482   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:20.975189   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:21.475018   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:21.974893   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:22.474879   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:22.974631   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:23.475009   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:23.974976   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:24.474766   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:24.975271   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:25.475156   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:25.974490   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:26.474616   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:26.975270   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:27.475271   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:27.974512   27386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:28:28.090419   27386 kubeadm.go:1088] duration metric: took 16.305826826s to wait for elevateKubeSystemPrivileges.
	I0108 20:28:28.090461   27386 kubeadm.go:406] StartCluster complete in 29.249561671s
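The roughly 16 s of repeated "kubectl get sa default" calls above is the elevateKubeSystemPrivileges wait: minikube polls until kube-controller-manager has created the default ServiceAccount, the same step that binds cluster-admin to kube-system:default through the minikube-rbac ClusterRoleBinding created just before the loop. Expressed as a plain retry loop, with an assumed 60 s budget:

    # Poll every 0.5 s until the default ServiceAccount exists; give up after ~60 s.
    for _ in $(seq 1 120); do
      sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1 && break
      sleep 0.5
    done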
	I0108 20:28:28.090483   27386 settings.go:142] acquiring lock: {Name:mk91d3baf51872e4bb0758b94fca7c7249bb9666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:28:28.090579   27386 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:28:28.091370   27386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/kubeconfig: {Name:mkeb2e8a20e31c0c2d5c7e8214a27af3141300ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:28:28.091617   27386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:28:28.091813   27386 config.go:182] Loaded profile config "ingress-addon-legacy-056019": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0108 20:28:28.091751   27386 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 20:28:28.091910   27386 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-056019"
	I0108 20:28:28.091916   27386 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-056019"
	I0108 20:28:28.091926   27386 addons.go:237] Setting addon storage-provisioner=true in "ingress-addon-legacy-056019"
	I0108 20:28:28.091945   27386 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-056019"
	I0108 20:28:28.091983   27386 host.go:66] Checking if "ingress-addon-legacy-056019" exists ...
	I0108 20:28:28.092185   27386 kapi.go:59] client config for ingress-addon-legacy-056019: &rest.Config{Host:"https://192.168.39.48:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:28:28.092416   27386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:28:28.092453   27386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:28:28.092487   27386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:28:28.092517   27386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:28:28.092839   27386 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 20:28:28.107779   27386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35619
	I0108 20:28:28.108184   27386 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:28:28.108693   27386 main.go:141] libmachine: Using API Version  1
	I0108 20:28:28.108723   27386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:28:28.109079   27386 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:28:28.109267   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetState
	I0108 20:28:28.110451   27386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32927
	I0108 20:28:28.110941   27386 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:28:28.111482   27386 main.go:141] libmachine: Using API Version  1
	I0108 20:28:28.111501   27386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:28:28.111974   27386 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:28:28.112463   27386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:28:28.112490   27386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:28:28.113078   27386 kapi.go:59] client config for ingress-addon-legacy-056019: &rest.Config{Host:"https://192.168.39.48:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:28:28.113486   27386 addons.go:237] Setting addon default-storageclass=true in "ingress-addon-legacy-056019"
	I0108 20:28:28.113540   27386 host.go:66] Checking if "ingress-addon-legacy-056019" exists ...
	I0108 20:28:28.114104   27386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:28:28.114159   27386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:28:28.127487   27386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43871
	I0108 20:28:28.127918   27386 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:28:28.128375   27386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
	I0108 20:28:28.128438   27386 main.go:141] libmachine: Using API Version  1
	I0108 20:28:28.128461   27386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:28:28.128737   27386 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:28:28.128829   27386 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:28:28.128918   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetState
	I0108 20:28:28.129303   27386 main.go:141] libmachine: Using API Version  1
	I0108 20:28:28.129321   27386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:28:28.129887   27386 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:28:28.130524   27386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:28:28.130563   27386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:28:28.130628   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .DriverName
	I0108 20:28:28.132443   27386 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:28:28.134301   27386 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:28:28.134331   27386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:28:28.134354   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHHostname
	I0108 20:28:28.137944   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:28:28.138385   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:28:28.138424   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:28:28.138685   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHPort
	I0108 20:28:28.138924   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:28:28.139122   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHUsername
	I0108 20:28:28.139284   27386 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/id_rsa Username:docker}
	I0108 20:28:28.147305   27386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46299
	I0108 20:28:28.147987   27386 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:28:28.148661   27386 main.go:141] libmachine: Using API Version  1
	I0108 20:28:28.148688   27386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:28:28.149070   27386 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:28:28.149271   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetState
	I0108 20:28:28.151186   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .DriverName
	I0108 20:28:28.151462   27386 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:28:28.151482   27386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:28:28.151503   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHHostname
	I0108 20:28:28.154222   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:28:28.154639   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:0d:a8", ip: ""} in network mk-ingress-addon-legacy-056019: {Iface:virbr1 ExpiryTime:2024-01-08 21:27:39 +0000 UTC Type:0 Mac:52:54:00:a9:0d:a8 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ingress-addon-legacy-056019 Clientid:01:52:54:00:a9:0d:a8}
	I0108 20:28:28.154674   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | domain ingress-addon-legacy-056019 has defined IP address 192.168.39.48 and MAC address 52:54:00:a9:0d:a8 in network mk-ingress-addon-legacy-056019
	I0108 20:28:28.154871   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHPort
	I0108 20:28:28.155095   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHKeyPath
	I0108 20:28:28.155331   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .GetSSHUsername
	I0108 20:28:28.155508   27386 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/ingress-addon-legacy-056019/id_rsa Username:docker}
	I0108 20:28:28.332758   27386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 20:28:28.350050   27386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:28:28.645855   27386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
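The sed pipeline above edits the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to 192.168.39.1 ahead of the forward directive (with fallthrough so other names still go to /etc/resolv.conf) and adds the log plugin ahead of errors. A hedged way to confirm the patched ConfigMap, with the expected fragment reconstructed from the sed expressions rather than captured from the cluster:

    # Print the patched Corefile; it should now contain the injected hosts block.
    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # Expected fragment per the sed above:
    #        hosts {
    #           192.168.39.1 host.minikube.internal
    #           fallthrough
    #        }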
	I0108 20:28:28.671088   27386 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-056019" context rescaled to 1 replicas
	I0108 20:28:28.671140   27386 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:28:28.673112   27386 out.go:177] * Verifying Kubernetes components...
	I0108 20:28:28.674601   27386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:28:28.818078   27386 main.go:141] libmachine: Making call to close driver server
	I0108 20:28:28.818118   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .Close
	I0108 20:28:28.818451   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Closing plugin on server side
	I0108 20:28:28.818458   27386 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:28:28.818489   27386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:28:28.818500   27386 main.go:141] libmachine: Making call to close driver server
	I0108 20:28:28.818512   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .Close
	I0108 20:28:28.818745   27386 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:28:28.818774   27386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:28:28.818781   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Closing plugin on server side
	I0108 20:28:28.831323   27386 main.go:141] libmachine: Making call to close driver server
	I0108 20:28:28.831346   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .Close
	I0108 20:28:28.831607   27386 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:28:28.831632   27386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:28:29.091297   27386 main.go:141] libmachine: Making call to close driver server
	I0108 20:28:29.091326   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .Close
	I0108 20:28:29.091639   27386 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:28:29.091657   27386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:28:29.091666   27386 main.go:141] libmachine: Making call to close driver server
	I0108 20:28:29.091674   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) Calling .Close
	I0108 20:28:29.091887   27386 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:28:29.091920   27386 main.go:141] libmachine: (ingress-addon-legacy-056019) DBG | Closing plugin on server side
	I0108 20:28:29.091943   27386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:28:29.094248   27386 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0108 20:28:29.096397   27386 addons.go:508] enable addons completed in 1.004648021s: enabled=[default-storageclass storage-provisioner]
	I0108 20:28:29.224282   27386 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0108 20:28:29.224938   27386 kapi.go:59] client config for ingress-addon-legacy-056019: &rest.Config{Host:"https://192.168.39.48:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:28:29.225238   27386 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-056019" to be "Ready" ...
	I0108 20:28:29.249940   27386 node_ready.go:49] node "ingress-addon-legacy-056019" has status "Ready":"True"
	I0108 20:28:29.249962   27386 node_ready.go:38] duration metric: took 24.704629ms waiting for node "ingress-addon-legacy-056019" to be "Ready" ...
	I0108 20:28:29.249972   27386 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:28:29.259758   27386 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace to be "Ready" ...
	I0108 20:28:31.268239   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:28:33.766728   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:28:36.267450   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:28:38.268226   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:28:40.767278   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:28:43.267654   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:28:45.767070   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:28:47.767437   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:28:49.769265   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:28:52.267782   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:28:54.268041   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:28:56.268658   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:28:58.767086   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:29:00.769086   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:29:03.268422   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:29:05.767619   27386 pod_ready.go:102] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"False"
	I0108 20:29:06.271322   27386 pod_ready.go:92] pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace has status "Ready":"True"
	I0108 20:29:06.271347   27386 pod_ready.go:81] duration metric: took 37.011546033s waiting for pod "coredns-66bff467f8-v5pjk" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:06.271359   27386 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-056019" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:06.281275   27386 pod_ready.go:92] pod "etcd-ingress-addon-legacy-056019" in "kube-system" namespace has status "Ready":"True"
	I0108 20:29:06.281304   27386 pod_ready.go:81] duration metric: took 9.93722ms waiting for pod "etcd-ingress-addon-legacy-056019" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:06.281316   27386 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-056019" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:06.287722   27386 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-056019" in "kube-system" namespace has status "Ready":"True"
	I0108 20:29:06.287750   27386 pod_ready.go:81] duration metric: took 6.4278ms waiting for pod "kube-apiserver-ingress-addon-legacy-056019" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:06.287760   27386 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-056019" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:06.294387   27386 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-056019" in "kube-system" namespace has status "Ready":"True"
	I0108 20:29:06.294441   27386 pod_ready.go:81] duration metric: took 6.670207ms waiting for pod "kube-controller-manager-ingress-addon-legacy-056019" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:06.294462   27386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mbqkx" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:06.307553   27386 pod_ready.go:92] pod "kube-proxy-mbqkx" in "kube-system" namespace has status "Ready":"True"
	I0108 20:29:06.307583   27386 pod_ready.go:81] duration metric: took 13.109786ms waiting for pod "kube-proxy-mbqkx" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:06.307596   27386 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-056019" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:06.461040   27386 request.go:629] Waited for 153.375216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-056019
	I0108 20:29:06.660008   27386 request.go:629] Waited for 195.34914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes/ingress-addon-legacy-056019
	I0108 20:29:06.663527   27386 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-056019" in "kube-system" namespace has status "Ready":"True"
	I0108 20:29:06.663557   27386 pod_ready.go:81] duration metric: took 355.952369ms waiting for pod "kube-scheduler-ingress-addon-legacy-056019" in "kube-system" namespace to be "Ready" ...
	I0108 20:29:06.663572   27386 pod_ready.go:38] duration metric: took 37.413590594s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:29:06.663590   27386 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:29:06.663663   27386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:29:06.679990   27386 api_server.go:72] duration metric: took 38.008803145s to wait for apiserver process to appear ...
	I0108 20:29:06.680021   27386 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:29:06.680044   27386 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I0108 20:29:06.686359   27386 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I0108 20:29:06.687719   27386 api_server.go:141] control plane version: v1.18.20
	I0108 20:29:06.687744   27386 api_server.go:131] duration metric: took 7.716147ms to wait for apiserver health ...
	I0108 20:29:06.687752   27386 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:29:06.860118   27386 request.go:629] Waited for 172.293852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I0108 20:29:06.866014   27386 system_pods.go:59] 7 kube-system pods found
	I0108 20:29:06.866043   27386 system_pods.go:61] "coredns-66bff467f8-v5pjk" [01fc36c3-e7b8-4bdd-ba78-89a9e9454ea9] Running
	I0108 20:29:06.866048   27386 system_pods.go:61] "etcd-ingress-addon-legacy-056019" [ed4f9727-7d83-4d60-8926-69b899121442] Running
	I0108 20:29:06.866052   27386 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-056019" [d08c6575-a4bd-4ead-8197-7effa13e2a84] Running
	I0108 20:29:06.866057   27386 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-056019" [88a7c67d-4d1a-4752-aad4-5b099efcdedd] Running
	I0108 20:29:06.866061   27386 system_pods.go:61] "kube-proxy-mbqkx" [a3c592df-5106-4f58-a045-104893850f63] Running
	I0108 20:29:06.866064   27386 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-056019" [849312f7-62f9-42d8-9cd2-1410612ebb8b] Running
	I0108 20:29:06.866068   27386 system_pods.go:61] "storage-provisioner" [294fbd9e-db13-4f14-aa3c-33abe6a1e5ad] Running
	I0108 20:29:06.866074   27386 system_pods.go:74] duration metric: took 178.316491ms to wait for pod list to return data ...
	I0108 20:29:06.866139   27386 default_sa.go:34] waiting for default service account to be created ...
	I0108 20:29:07.060673   27386 request.go:629] Waited for 194.442343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/default/serviceaccounts
	I0108 20:29:07.063974   27386 default_sa.go:45] found service account: "default"
	I0108 20:29:07.064000   27386 default_sa.go:55] duration metric: took 197.843324ms for default service account to be created ...
	I0108 20:29:07.064008   27386 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 20:29:07.260368   27386 request.go:629] Waited for 196.306816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/namespaces/kube-system/pods
	I0108 20:29:07.268166   27386 system_pods.go:86] 7 kube-system pods found
	I0108 20:29:07.268201   27386 system_pods.go:89] "coredns-66bff467f8-v5pjk" [01fc36c3-e7b8-4bdd-ba78-89a9e9454ea9] Running
	I0108 20:29:07.268207   27386 system_pods.go:89] "etcd-ingress-addon-legacy-056019" [ed4f9727-7d83-4d60-8926-69b899121442] Running
	I0108 20:29:07.268211   27386 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-056019" [d08c6575-a4bd-4ead-8197-7effa13e2a84] Running
	I0108 20:29:07.268216   27386 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-056019" [88a7c67d-4d1a-4752-aad4-5b099efcdedd] Running
	I0108 20:29:07.268219   27386 system_pods.go:89] "kube-proxy-mbqkx" [a3c592df-5106-4f58-a045-104893850f63] Running
	I0108 20:29:07.268224   27386 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-056019" [849312f7-62f9-42d8-9cd2-1410612ebb8b] Running
	I0108 20:29:07.268227   27386 system_pods.go:89] "storage-provisioner" [294fbd9e-db13-4f14-aa3c-33abe6a1e5ad] Running
	I0108 20:29:07.268234   27386 system_pods.go:126] duration metric: took 204.221056ms to wait for k8s-apps to be running ...
	I0108 20:29:07.268241   27386 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:29:07.268293   27386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:29:07.284701   27386 system_svc.go:56] duration metric: took 16.449167ms WaitForService to wait for kubelet.
	I0108 20:29:07.284742   27386 kubeadm.go:581] duration metric: took 38.613561491s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:29:07.284770   27386 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:29:07.460163   27386 request.go:629] Waited for 175.325758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.48:8443/api/v1/nodes
	I0108 20:29:07.464034   27386 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:29:07.464065   27386 node_conditions.go:123] node cpu capacity is 2
	I0108 20:29:07.464076   27386 node_conditions.go:105] duration metric: took 179.301138ms to run NodePressure ...
	I0108 20:29:07.464087   27386 start.go:228] waiting for startup goroutines ...
	I0108 20:29:07.464108   27386 start.go:233] waiting for cluster config update ...
	I0108 20:29:07.464118   27386 start.go:242] writing updated cluster config ...
	I0108 20:29:07.464388   27386 ssh_runner.go:195] Run: rm -f paused
	I0108 20:29:07.512559   27386 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0108 20:29:07.514829   27386 out.go:177] 
	W0108 20:29:07.516503   27386 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0108 20:29:07.518517   27386 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0108 20:29:07.520708   27386 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-056019" cluster and "default" namespace by default
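	(Editor's note on the version-skew warning above: the host /usr/local/bin/kubectl is 1.29.0 while the cluster runs Kubernetes 1.18.20. One way to avoid the skew when inspecting this cluster is to use minikube's bundled kubectl, as the log itself suggests. A minimal sketch, assuming the profile name is the ingress-addon-legacy-056019 cluster configured above:

		# kubectl matching the cluster's Kubernetes version (v1.18.20), scoped to this profile
		minikube -p ingress-addon-legacy-056019 kubectl -- get pods -A
		# or, for the currently active profile, exactly as suggested in the log:
		minikube kubectl -- get pods -A
	)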
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 20:27:35 UTC, ends at Mon 2024-01-08 20:32:20 UTC. --
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.578450858Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704745940578433453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=e02f4d19-bfd1-4ced-9056-76cd1182ed52 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.579701331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=25b42693-dfbc-4ca2-abfb-fc2af5b7976a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.579757136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=25b42693-dfbc-4ca2-abfb-fc2af5b7976a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.580065143Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc30d51a40d7c947bfd28f4bdaf7a3427953bb2b719ab690da4693931fdf807c,PodSandboxId:09040001ec12b896208d7709864ff3fb413c4c185febb8ec2990f779e832155f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704745927341664479,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-pdzkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b8956853-fa97-4717-b1c4-2a8c38f925b8,},Annotations:map[string]string{io.kubernetes.container.hash: c042e637,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08e51a00ffc0f1a7d36696f0ea133f1100cdedd17797af3a264b192db11eba1,PodSandboxId:338cbf02af4369e0ed276c9a8a54d8121417bc10ed722d7e4cffc160e468866f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704745784427821161,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5b00b09-3d7c-4888-82a4-47b8c33733ca,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 87237e33,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a836a3dad38fcf25b157839bca18702500bf14ca74fb79a1fc797df38b5e94c7,PodSandboxId:720df2ef09b4bdf83a185d936be1251ca7cb830dc1066a3735c8b265e416af64,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1704745765216486135,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-hqmw6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b5270a11-edb9-41ac-a56e-f9eef62a8075,},Annotations:map[string]string{io.kubernetes.container.hash: a71e0781,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0b307afe57d440288a0083014bbff699a5e557b42bbe308691e3c13761ae3d15,PodSandboxId:7b46a503f81418ce610e63eb39fb2792a09ff3fb3916d1d04462dda734a78575,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704745754701664129,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zjjkb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9ae414c9-0111-4e93-93ac-ee9d3b09f886,},Annotations:map[string]string{io.kubernetes.container.hash: 8faaa1b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b25b876d594b2b651be12b78a06e737fbc9fee917619e2c6c76e048116cb628,PodSandboxId:44f0334c0697cc8f257cdee491bc51d0a045f95ba07f6309c0bb0b8ad53adb87,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704745753554455276,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hvc94,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 554b4a70-e00a-48d0-b88b-280b37ea01ea,},Annotations:map[string]string{io.kubernetes.container.hash: c3fed594,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a377d5da6e4c70566eb0f998e4914398165007e0ffc84f3b15a668217d8599b,PodSandboxId:6ea4f8dc93ee5fd9b4e023ed3b74e888437eab0831ace4437a5dff285a0591c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704745709966411577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 294fbd9e-db13-4f14-aa3c-33abe6a1e5ad,},Annotations:map[string]string{io.kubernetes.container.hash: 5410a4b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809e342641ad8d85590f81291afab3e8b0332a1b3c624e2e5652f1b67b331c5d,PodSandboxId:533beb99cedd9590e70240897646653857d60fbaa53414d21a585987a4887577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704745709579151902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mbqkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c592df-5106-4f58-a045-104893850f63,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4a2eae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37785c536ddb81c59d094c8f910ebe2758077b70d133de22913b8360bf0a1121,PodSandboxId:a452cbbb24e39a7847766ed39616decf9e804cc8481637b4b4c0fa5c6a788dd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704745709252212944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-v5pjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fc36c3-e7b8-4bdd-ba78-89a9e9454ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 68ec55cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec746517e90e244a784898efc6416bbc16a063e2a3c9fbbc41058ae8ef66dda,Pod
SandboxId:e833d7b8e28f1fc7b226a852c131f33a3372dddbc9bf544d353a47daee0a492c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704745684548477036,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e034ab27489a1a94d17b66845335bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e60e9d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547e00c7674aa93b26e12ad5d6af1761031034799c6b1b996b2738a7bba0c961,PodSandboxId:8ad0a8d9acb749496be7cf1bccf12df0d55e
f1a3455110149b5c5cfd4246bc6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704745683680042443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e3f2ba5499aeea1ccebc66c34a1b567a4fd10efbc3a0f560d322cd357cb4702,PodSandboxId:aa7150bc9c0062ebfffab1efb7aba395dfb64773b2
c31e425f393e2eda0c52e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704745683416166345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dda451b383bd496174337eddd0a0db3baebcd8e819fe9b5778e041334abb31,PodSandboxId:e153698cc106
f5f9f0e20113f3cad0d4f9cc48ed99a23068a89651217a17c452,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704745683303102704,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b65118d80c7fb5267315ef1c348e2f,},Annotations:map[string]string{io.kubernetes.container.hash: 8eda77b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=25b42693-dfbc-4ca2-abfb-fc2af5b7976a name=/runtime.v1.RuntimeSer
vice/ListContainers
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.631832933Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=be1db192-e922-4628-93ce-65ecc1165951 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.631913942Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=be1db192-e922-4628-93ce-65ecc1165951 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.634885849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c0aebf96-4e13-40d0-8c44-7ffceaa3fefe name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.637593075Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704745940637565035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=c0aebf96-4e13-40d0-8c44-7ffceaa3fefe name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.644593886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=849c8fcf-9e86-4e61-abf7-9c307c244f3c name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.645035352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=849c8fcf-9e86-4e61-abf7-9c307c244f3c name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.646327495Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc30d51a40d7c947bfd28f4bdaf7a3427953bb2b719ab690da4693931fdf807c,PodSandboxId:09040001ec12b896208d7709864ff3fb413c4c185febb8ec2990f779e832155f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704745927341664479,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-pdzkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b8956853-fa97-4717-b1c4-2a8c38f925b8,},Annotations:map[string]string{io.kubernetes.container.hash: c042e637,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08e51a00ffc0f1a7d36696f0ea133f1100cdedd17797af3a264b192db11eba1,PodSandboxId:338cbf02af4369e0ed276c9a8a54d8121417bc10ed722d7e4cffc160e468866f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704745784427821161,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5b00b09-3d7c-4888-82a4-47b8c33733ca,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 87237e33,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a836a3dad38fcf25b157839bca18702500bf14ca74fb79a1fc797df38b5e94c7,PodSandboxId:720df2ef09b4bdf83a185d936be1251ca7cb830dc1066a3735c8b265e416af64,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1704745765216486135,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-hqmw6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b5270a11-edb9-41ac-a56e-f9eef62a8075,},Annotations:map[string]string{io.kubernetes.container.hash: a71e0781,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0b307afe57d440288a0083014bbff699a5e557b42bbe308691e3c13761ae3d15,PodSandboxId:7b46a503f81418ce610e63eb39fb2792a09ff3fb3916d1d04462dda734a78575,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704745754701664129,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zjjkb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9ae414c9-0111-4e93-93ac-ee9d3b09f886,},Annotations:map[string]string{io.kubernetes.container.hash: 8faaa1b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b25b876d594b2b651be12b78a06e737fbc9fee917619e2c6c76e048116cb628,PodSandboxId:44f0334c0697cc8f257cdee491bc51d0a045f95ba07f6309c0bb0b8ad53adb87,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704745753554455276,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hvc94,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 554b4a70-e00a-48d0-b88b-280b37ea01ea,},Annotations:map[string]string{io.kubernetes.container.hash: c3fed594,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a377d5da6e4c70566eb0f998e4914398165007e0ffc84f3b15a668217d8599b,PodSandboxId:6ea4f8dc93ee5fd9b4e023ed3b74e888437eab0831ace4437a5dff285a0591c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704745709966411577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 294fbd9e-db13-4f14-aa3c-33abe6a1e5ad,},Annotations:map[string]string{io.kubernetes.container.hash: 5410a4b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809e342641ad8d85590f81291afab3e8b0332a1b3c624e2e5652f1b67b331c5d,PodSandboxId:533beb99cedd9590e70240897646653857d60fbaa53414d21a585987a4887577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704745709579151902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mbqkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c592df-5106-4f58-a045-104893850f63,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4a2eae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37785c536ddb81c59d094c8f910ebe2758077b70d133de22913b8360bf0a1121,PodSandboxId:a452cbbb24e39a7847766ed39616decf9e804cc8481637b4b4c0fa5c6a788dd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704745709252212944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-v5pjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fc36c3-e7b8-4bdd-ba78-89a9e9454ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 68ec55cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec746517e90e244a784898efc6416bbc16a063e2a3c9fbbc41058ae8ef66dda,Pod
SandboxId:e833d7b8e28f1fc7b226a852c131f33a3372dddbc9bf544d353a47daee0a492c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704745684548477036,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e034ab27489a1a94d17b66845335bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e60e9d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547e00c7674aa93b26e12ad5d6af1761031034799c6b1b996b2738a7bba0c961,PodSandboxId:8ad0a8d9acb749496be7cf1bccf12df0d55e
f1a3455110149b5c5cfd4246bc6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704745683680042443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e3f2ba5499aeea1ccebc66c34a1b567a4fd10efbc3a0f560d322cd357cb4702,PodSandboxId:aa7150bc9c0062ebfffab1efb7aba395dfb64773b2
c31e425f393e2eda0c52e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704745683416166345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dda451b383bd496174337eddd0a0db3baebcd8e819fe9b5778e041334abb31,PodSandboxId:e153698cc106
f5f9f0e20113f3cad0d4f9cc48ed99a23068a89651217a17c452,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704745683303102704,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b65118d80c7fb5267315ef1c348e2f,},Annotations:map[string]string{io.kubernetes.container.hash: 8eda77b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=849c8fcf-9e86-4e61-abf7-9c307c244f3c name=/runtime.v1.RuntimeSer
vice/ListContainers
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.701733100Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=25f1e3bf-f02f-4e7a-9c0d-03eeda225b94 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.701821249Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=25f1e3bf-f02f-4e7a-9c0d-03eeda225b94 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.704032786Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cfc6bd7f-63d0-42e0-9a63-8ffb5796f69e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.704611349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704745940704592226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=cfc6bd7f-63d0-42e0-9a63-8ffb5796f69e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.705756229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=70197127-109a-498f-b053-ddbc9d28960f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.705804865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=70197127-109a-498f-b053-ddbc9d28960f name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.706041474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc30d51a40d7c947bfd28f4bdaf7a3427953bb2b719ab690da4693931fdf807c,PodSandboxId:09040001ec12b896208d7709864ff3fb413c4c185febb8ec2990f779e832155f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704745927341664479,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-pdzkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b8956853-fa97-4717-b1c4-2a8c38f925b8,},Annotations:map[string]string{io.kubernetes.container.hash: c042e637,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08e51a00ffc0f1a7d36696f0ea133f1100cdedd17797af3a264b192db11eba1,PodSandboxId:338cbf02af4369e0ed276c9a8a54d8121417bc10ed722d7e4cffc160e468866f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704745784427821161,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5b00b09-3d7c-4888-82a4-47b8c33733ca,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 87237e33,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a836a3dad38fcf25b157839bca18702500bf14ca74fb79a1fc797df38b5e94c7,PodSandboxId:720df2ef09b4bdf83a185d936be1251ca7cb830dc1066a3735c8b265e416af64,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1704745765216486135,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-hqmw6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b5270a11-edb9-41ac-a56e-f9eef62a8075,},Annotations:map[string]string{io.kubernetes.container.hash: a71e0781,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0b307afe57d440288a0083014bbff699a5e557b42bbe308691e3c13761ae3d15,PodSandboxId:7b46a503f81418ce610e63eb39fb2792a09ff3fb3916d1d04462dda734a78575,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704745754701664129,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zjjkb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9ae414c9-0111-4e93-93ac-ee9d3b09f886,},Annotations:map[string]string{io.kubernetes.container.hash: 8faaa1b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b25b876d594b2b651be12b78a06e737fbc9fee917619e2c6c76e048116cb628,PodSandboxId:44f0334c0697cc8f257cdee491bc51d0a045f95ba07f6309c0bb0b8ad53adb87,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704745753554455276,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hvc94,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 554b4a70-e00a-48d0-b88b-280b37ea01ea,},Annotations:map[string]string{io.kubernetes.container.hash: c3fed594,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a377d5da6e4c70566eb0f998e4914398165007e0ffc84f3b15a668217d8599b,PodSandboxId:6ea4f8dc93ee5fd9b4e023ed3b74e888437eab0831ace4437a5dff285a0591c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704745709966411577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 294fbd9e-db13-4f14-aa3c-33abe6a1e5ad,},Annotations:map[string]string{io.kubernetes.container.hash: 5410a4b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809e342641ad8d85590f81291afab3e8b0332a1b3c624e2e5652f1b67b331c5d,PodSandboxId:533beb99cedd9590e70240897646653857d60fbaa53414d21a585987a4887577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704745709579151902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mbqkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c592df-5106-4f58-a045-104893850f63,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4a2eae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37785c536ddb81c59d094c8f910ebe2758077b70d133de22913b8360bf0a1121,PodSandboxId:a452cbbb24e39a7847766ed39616decf9e804cc8481637b4b4c0fa5c6a788dd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704745709252212944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-v5pjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fc36c3-e7b8-4bdd-ba78-89a9e9454ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 68ec55cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec746517e90e244a784898efc6416bbc16a063e2a3c9fbbc41058ae8ef66dda,Pod
SandboxId:e833d7b8e28f1fc7b226a852c131f33a3372dddbc9bf544d353a47daee0a492c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704745684548477036,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e034ab27489a1a94d17b66845335bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e60e9d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547e00c7674aa93b26e12ad5d6af1761031034799c6b1b996b2738a7bba0c961,PodSandboxId:8ad0a8d9acb749496be7cf1bccf12df0d55e
f1a3455110149b5c5cfd4246bc6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704745683680042443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e3f2ba5499aeea1ccebc66c34a1b567a4fd10efbc3a0f560d322cd357cb4702,PodSandboxId:aa7150bc9c0062ebfffab1efb7aba395dfb64773b2
c31e425f393e2eda0c52e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704745683416166345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dda451b383bd496174337eddd0a0db3baebcd8e819fe9b5778e041334abb31,PodSandboxId:e153698cc106
f5f9f0e20113f3cad0d4f9cc48ed99a23068a89651217a17c452,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704745683303102704,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b65118d80c7fb5267315ef1c348e2f,},Annotations:map[string]string{io.kubernetes.container.hash: 8eda77b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=70197127-109a-498f-b053-ddbc9d28960f name=/runtime.v1.RuntimeSer
vice/ListContainers
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.747497489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=53248a18-69fb-44b3-a138-1446eaa3e301 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.747597691Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=53248a18-69fb-44b3-a138-1446eaa3e301 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.749884199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5f132602-5a33-40ca-a583-1534e7120d80 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.750454113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704745940750437113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202825,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=5f132602-5a33-40ca-a583-1534e7120d80 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.751393024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ce6b4498-a864-479f-b3ad-284d740ee70b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.751523855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ce6b4498-a864-479f-b3ad-284d740ee70b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:32:20 ingress-addon-legacy-056019 crio[714]: time="2024-01-08 20:32:20.751851720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc30d51a40d7c947bfd28f4bdaf7a3427953bb2b719ab690da4693931fdf807c,PodSandboxId:09040001ec12b896208d7709864ff3fb413c4c185febb8ec2990f779e832155f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1704745927341664479,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-pdzkx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b8956853-fa97-4717-b1c4-2a8c38f925b8,},Annotations:map[string]string{io.kubernetes.container.hash: c042e637,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f08e51a00ffc0f1a7d36696f0ea133f1100cdedd17797af3a264b192db11eba1,PodSandboxId:338cbf02af4369e0ed276c9a8a54d8121417bc10ed722d7e4cffc160e468866f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686,State:CONTAINER_RUNNING,CreatedAt:1704745784427821161,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5b00b09-3d7c-4888-82a4-47b8c33733ca,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 87237e33,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a836a3dad38fcf25b157839bca18702500bf14ca74fb79a1fc797df38b5e94c7,PodSandboxId:720df2ef09b4bdf83a185d936be1251ca7cb830dc1066a3735c8b265e416af64,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1704745765216486135,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-hqmw6,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b5270a11-edb9-41ac-a56e-f9eef62a8075,},Annotations:map[string]string{io.kubernetes.container.hash: a71e0781,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0b307afe57d440288a0083014bbff699a5e557b42bbe308691e3c13761ae3d15,PodSandboxId:7b46a503f81418ce610e63eb39fb2792a09ff3fb3916d1d04462dda734a78575,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704745754701664129,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zjjkb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9ae414c9-0111-4e93-93ac-ee9d3b09f886,},Annotations:map[string]string{io.kubernetes.container.hash: 8faaa1b4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b25b876d594b2b651be12b78a06e737fbc9fee917619e2c6c76e048116cb628,PodSandboxId:44f0334c0697cc8f257cdee491bc51d0a045f95ba07f6309c0bb0b8ad53adb87,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1704745753554455276,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hvc94,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 554b4a70-e00a-48d0-b88b-280b37ea01ea,},Annotations:map[string]string{io.kubernetes.container.hash: c3fed594,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a377d5da6e4c70566eb0f998e4914398165007e0ffc84f3b15a668217d8599b,PodSandboxId:6ea4f8dc93ee5fd9b4e023ed3b74e888437eab0831ace4437a5dff285a0591c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704745709966411577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 294fbd9e-db13-4f14-aa3c-33abe6a1e5ad,},Annotations:map[string]string{io.kubernetes.container.hash: 5410a4b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809e342641ad8d85590f81291afab3e8b0332a1b3c624e2e5652f1b67b331c5d,PodSandboxId:533beb99cedd9590e70240897646653857d60fbaa53414d21a585987a4887577,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1704745709579151902,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mbqkx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3c592df-5106-4f58-a045-104893850f63,},Annotations:map[string]string{io.kubernetes.container.hash: 2f4a2eae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37785c536ddb81c59d094c8f910ebe2758077b70d133de22913b8360bf0a1121,PodSandboxId:a452cbbb24e39a7847766ed39616decf9e804cc8481637b4b4c0fa5c6a788dd3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1704745709252212944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-v5pjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01fc36c3-e7b8-4bdd-ba78-89a9e9454ea9,},Annotations:map[string]string{io.kubernetes.container.hash: 68ec55cd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec746517e90e244a784898efc6416bbc16a063e2a3c9fbbc41058ae8ef66dda,Pod
SandboxId:e833d7b8e28f1fc7b226a852c131f33a3372dddbc9bf544d353a47daee0a492c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1704745684548477036,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e034ab27489a1a94d17b66845335bd9,},Annotations:map[string]string{io.kubernetes.container.hash: e60e9d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:547e00c7674aa93b26e12ad5d6af1761031034799c6b1b996b2738a7bba0c961,PodSandboxId:8ad0a8d9acb749496be7cf1bccf12df0d55e
f1a3455110149b5c5cfd4246bc6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1704745683680042443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e3f2ba5499aeea1ccebc66c34a1b567a4fd10efbc3a0f560d322cd357cb4702,PodSandboxId:aa7150bc9c0062ebfffab1efb7aba395dfb64773b2
c31e425f393e2eda0c52e5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1704745683416166345,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dda451b383bd496174337eddd0a0db3baebcd8e819fe9b5778e041334abb31,PodSandboxId:e153698cc106
f5f9f0e20113f3cad0d4f9cc48ed99a23068a89651217a17c452,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1704745683303102704,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-056019,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b65118d80c7fb5267315ef1c348e2f,},Annotations:map[string]string{io.kubernetes.container.hash: 8eda77b1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ce6b4498-a864-479f-b3ad-284d740ee70b name=/runtime.v1.RuntimeSer
vice/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fc30d51a40d7c       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            13 seconds ago      Running             hello-world-app           0                   09040001ec12b       hello-world-app-5f5d8b66bb-pdzkx
	f08e51a00ffc0       docker.io/library/nginx@sha256:2d2a2257c6e9d2e5b50d4fbeb436d8d2b55631c2a89935a425b417eb95212686                    2 minutes ago       Running             nginx                     0                   338cbf02af436       nginx
	a836a3dad38fc       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   720df2ef09b4b       ingress-nginx-controller-7fcf777cb7-hqmw6
	0b307afe57d44       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   7b46a503f8141       ingress-nginx-admission-patch-zjjkb
	5b25b876d594b       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   44f0334c0697c       ingress-nginx-admission-create-hvc94
	1a377d5da6e4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   6ea4f8dc93ee5       storage-provisioner
	809e342641ad8       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   533beb99cedd9       kube-proxy-mbqkx
	37785c536ddb8       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   a452cbbb24e39       coredns-66bff467f8-v5pjk
	7ec746517e90e       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   e833d7b8e28f1       etcd-ingress-addon-legacy-056019
	547e00c7674aa       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   8ad0a8d9acb74       kube-scheduler-ingress-addon-legacy-056019
	8e3f2ba5499ae       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   aa7150bc9c006       kube-controller-manager-ingress-addon-legacy-056019
	45dda451b383b       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   e153698cc106f       kube-apiserver-ingress-addon-legacy-056019
	
	
	==> coredns [37785c536ddb81c59d094c8f910ebe2758077b70d133de22913b8360bf0a1121] <==
	[INFO] 10.244.0.5:33469 - 29180 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000096034s
	[INFO] 10.244.0.5:38618 - 32597 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000062409s
	[INFO] 10.244.0.5:33469 - 60224 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000177742s
	[INFO] 10.244.0.5:38618 - 43696 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000506s
	[INFO] 10.244.0.5:33469 - 49918 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000135268s
	[INFO] 10.244.0.5:33469 - 19259 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000123202s
	[INFO] 10.244.0.5:38618 - 16732 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000162808s
	[INFO] 10.244.0.5:38618 - 15279 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004724s
	[INFO] 10.244.0.5:33469 - 68 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000117138s
	[INFO] 10.244.0.5:38618 - 41810 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042278s
	[INFO] 10.244.0.5:38618 - 43200 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00027762s
	[INFO] 10.244.0.5:56550 - 16986 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000123451s
	[INFO] 10.244.0.5:50385 - 61130 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000047996s
	[INFO] 10.244.0.5:56550 - 6473 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000212084s
	[INFO] 10.244.0.5:50385 - 27662 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000303826s
	[INFO] 10.244.0.5:50385 - 4252 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046107s
	[INFO] 10.244.0.5:56550 - 26289 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064735s
	[INFO] 10.244.0.5:50385 - 56596 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038545s
	[INFO] 10.244.0.5:56550 - 367 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059552s
	[INFO] 10.244.0.5:50385 - 63043 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038138s
	[INFO] 10.244.0.5:50385 - 40601 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029801s
	[INFO] 10.244.0.5:50385 - 10409 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000034893s
	[INFO] 10.244.0.5:56550 - 12703 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068244s
	[INFO] 10.244.0.5:56550 - 12709 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000075565s
	[INFO] 10.244.0.5:56550 - 33990 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0006836s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-056019
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-056019
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=ingress-addon-legacy-056019
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T20_28_11_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:28:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-056019
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:32:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:32:12 +0000   Mon, 08 Jan 2024 20:28:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:32:12 +0000   Mon, 08 Jan 2024 20:28:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:32:12 +0000   Mon, 08 Jan 2024 20:28:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:32:12 +0000   Mon, 08 Jan 2024 20:28:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ingress-addon-legacy-056019
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 dd48a06b315847da80ff5cf23f120c96
	  System UUID:                dd48a06b-3158-47da-80ff-5cf23f120c96
	  Boot ID:                    2acdf191-0e9a-4c82-9c46-c36b7e80238c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-pdzkx                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-66bff467f8-v5pjk                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m53s
	  kube-system                 etcd-ingress-addon-legacy-056019                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-apiserver-ingress-addon-legacy-056019             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-056019    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-mbqkx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-scheduler-ingress-addon-legacy-056019             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m19s (x5 over 4m20s)  kubelet     Node ingress-addon-legacy-056019 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x5 over 4m20s)  kubelet     Node ingress-addon-legacy-056019 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x5 over 4m20s)  kubelet     Node ingress-addon-legacy-056019 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m9s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet     Node ingress-addon-legacy-056019 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet     Node ingress-addon-legacy-056019 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet     Node ingress-addon-legacy-056019 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m59s                  kubelet     Node ingress-addon-legacy-056019 status is now: NodeReady
	  Normal  Starting                 3m52s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan 8 20:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.096816] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.510226] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.574122] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.148254] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.080619] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.310110] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.120086] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.140026] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.105399] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.224675] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[  +8.775694] systemd-fstab-generator[1026]: Ignoring "noauto" for root device
	[Jan 8 20:28] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.512392] systemd-fstab-generator[1413]: Ignoring "noauto" for root device
	[ +17.573592] kauditd_printk_skb: 6 callbacks suppressed
	[Jan 8 20:29] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.207101] kauditd_printk_skb: 6 callbacks suppressed
	[ +22.445616] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.077249] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [7ec746517e90e244a784898efc6416bbc16a063e2a3c9fbbc41058ae8ef66dda] <==
	2024-01-08 20:28:04.705855 W | auth: simple token is not cryptographically signed
	2024-01-08 20:28:04.710509 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-08 20:28:04.714206 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-08 20:28:04.714499 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-08 20:28:04.714678 I | embed: listening for peers on 192.168.39.48:2380
	2024-01-08 20:28:04.714846 I | etcdserver: 7a50af7ffd27cbe1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/08 20:28:04 INFO: 7a50af7ffd27cbe1 switched to configuration voters=(8813737435007011809)
	2024-01-08 20:28:04.715621 I | etcdserver/membership: added member 7a50af7ffd27cbe1 [https://192.168.39.48:2380] to cluster 59383b002ca7add2
	raft2024/01/08 20:28:04 INFO: 7a50af7ffd27cbe1 is starting a new election at term 1
	raft2024/01/08 20:28:04 INFO: 7a50af7ffd27cbe1 became candidate at term 2
	raft2024/01/08 20:28:04 INFO: 7a50af7ffd27cbe1 received MsgVoteResp from 7a50af7ffd27cbe1 at term 2
	raft2024/01/08 20:28:04 INFO: 7a50af7ffd27cbe1 became leader at term 2
	raft2024/01/08 20:28:04 INFO: raft.node: 7a50af7ffd27cbe1 elected leader 7a50af7ffd27cbe1 at term 2
	2024-01-08 20:28:04.897764 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-08 20:28:04.899444 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-08 20:28:04.899686 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-08 20:28:04.899771 I | etcdserver: published {Name:ingress-addon-legacy-056019 ClientURLs:[https://192.168.39.48:2379]} to cluster 59383b002ca7add2
	2024-01-08 20:28:04.907332 I | embed: ready to serve client requests
	2024-01-08 20:28:04.908730 I | embed: serving client requests on 192.168.39.48:2379
	2024-01-08 20:28:04.908962 I | embed: ready to serve client requests
	2024-01-08 20:28:04.921664 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-08 20:28:27.667757 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/service-controller\" " with result "range_response_count:1 size:203" took too long (498.986182ms) to execute
	2024-01-08 20:28:27.667992 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (110.226716ms) to execute
	2024-01-08 20:29:22.596716 W | etcdserver: read-only range request "key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" " with result "range_response_count:3 size:13723" took too long (406.250347ms) to execute
	2024-01-08 20:29:22.601699 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1107" took too long (218.660445ms) to execute
	
	
	==> kernel <==
	 20:32:21 up 4 min,  0 users,  load average: 0.95, 0.66, 0.29
	Linux ingress-addon-legacy-056019 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [45dda451b383bd496174337eddd0a0db3baebcd8e819fe9b5778e041334abb31] <==
	I0108 20:28:08.210834       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 20:28:08.210875       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 20:28:08.258456       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0108 20:28:09.104144       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0108 20:28:09.104194       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0108 20:28:09.113060       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0108 20:28:09.121224       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0108 20:28:09.121388       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0108 20:28:09.693505       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 20:28:09.749613       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0108 20:28:09.903354       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.48]
	I0108 20:28:09.904490       1 controller.go:609] quota admission added evaluator for: endpoints
	I0108 20:28:09.911355       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 20:28:10.446158       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0108 20:28:11.581224       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0108 20:28:11.665431       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0108 20:28:12.060425       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 20:28:27.668718       1 trace.go:116] Trace[1543460543]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/service-controller,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/tokens-controller,client:192.168.39.48 (started: 2024-01-08 20:28:27.168364388 +0000 UTC m=+23.682909177) (total time: 500.310807ms):
	Trace[1543460543]: [500.259279ms] [500.253473ms] About to write a response
	I0108 20:28:27.749611       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0108 20:28:28.264653       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0108 20:29:08.421435       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0108 20:29:37.113225       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0108 20:32:12.202064       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0xc009ecf540), encoder:(*versioning.codec)(0xc00cc34c80), buf:(*bytes.Buffer)(0xc006a60210)})
	E0108 20:32:13.162848       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	
	==> kube-controller-manager [8e3f2ba5499aeea1ccebc66c34a1b567a4fd10efbc3a0f560d322cd357cb4702] <==
	I0108 20:28:28.156012       1 shared_informer.go:230] Caches are synced for taint 
	I0108 20:28:28.156290       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0108 20:28:28.156464       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-056019. Assuming now as a timestamp.
	I0108 20:28:28.156542       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0108 20:28:28.156670       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-056019", UID:"fb9530e7-409a-43c8-93e3-45950c97a0fb", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-056019 event: Registered Node ingress-addon-legacy-056019 in Controller
	I0108 20:28:28.156792       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0108 20:28:28.196982       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 20:28:28.205210       1 shared_informer.go:230] Caches are synced for resource quota 
	I0108 20:28:28.241301       1 shared_informer.go:230] Caches are synced for disruption 
	I0108 20:28:28.241388       1 disruption.go:339] Sending events to api server.
	I0108 20:28:28.243106       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0108 20:28:28.252639       1 shared_informer.go:230] Caches are synced for deployment 
	I0108 20:28:28.278818       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"02e9a1e6-0513-4241-9fa5-f4ce6e5a51a2", APIVersion:"apps/v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0108 20:28:28.309378       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 20:28:28.339092       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"d32975bf-c351-487b-9fc1-e052ef5b6aab", APIVersion:"apps/v1", ResourceVersion:"332", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-v5pjk
	I0108 20:28:28.348469       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0108 20:28:28.348546       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0108 20:29:08.410218       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"63fc448c-53f2-4ad9-bbf8-fced498bae44", APIVersion:"apps/v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0108 20:29:08.436165       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"c5c1f4c4-dde5-435f-aacb-a889c78a6613", APIVersion:"apps/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-hqmw6
	I0108 20:29:08.519494       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"a50b2980-8874-40c3-a441-cec1bcca623e", APIVersion:"batch/v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-hvc94
	I0108 20:29:08.641422       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"745a3f5d-0afc-408d-9694-dfa470881ca6", APIVersion:"batch/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-zjjkb
	I0108 20:29:14.459967       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"a50b2980-8874-40c3-a441-cec1bcca623e", APIVersion:"batch/v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 20:29:15.461882       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"745a3f5d-0afc-408d-9694-dfa470881ca6", APIVersion:"batch/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0108 20:32:02.966687       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"f6874f82-99a1-4cfe-98e7-e957978af1ee", APIVersion:"apps/v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0108 20:32:02.979007       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"88f35da3-91fa-40c7-8b19-244c6bf75442", APIVersion:"apps/v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-pdzkx
	
	
	==> kube-proxy [809e342641ad8d85590f81291afab3e8b0332a1b3c624e2e5652f1b67b331c5d] <==
	W0108 20:28:29.837788       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0108 20:28:29.847133       1 node.go:136] Successfully retrieved node IP: 192.168.39.48
	I0108 20:28:29.847224       1 server_others.go:186] Using iptables Proxier.
	I0108 20:28:29.847834       1 server.go:583] Version: v1.18.20
	I0108 20:28:29.850507       1 config.go:133] Starting endpoints config controller
	I0108 20:28:29.850643       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0108 20:28:29.850779       1 config.go:315] Starting service config controller
	I0108 20:28:29.850858       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0108 20:28:29.953902       1 shared_informer.go:230] Caches are synced for service config 
	I0108 20:28:29.953993       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [547e00c7674aa93b26e12ad5d6af1761031034799c6b1b996b2738a7bba0c961] <==
	I0108 20:28:08.231515       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 20:28:08.232173       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0108 20:28:08.232378       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0108 20:28:08.233118       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:28:08.235401       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 20:28:08.235529       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 20:28:08.235847       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:28:08.236965       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:28:08.237042       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:28:08.237097       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:28:08.237151       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:28:08.237199       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 20:28:08.237566       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 20:28:08.237997       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:28:08.238185       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 20:28:09.072355       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 20:28:09.122749       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 20:28:09.187125       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:28:09.192556       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:28:09.256750       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 20:28:09.259093       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:28:09.274082       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:28:09.295463       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:28:09.463789       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0108 20:28:12.232504       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 20:27:35 UTC, ends at Mon 2024-01-08 20:32:21 UTC. --
	Jan 08 20:29:26 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:29:26.806744    1420 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 08 20:29:27 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:29:27.003663    1420 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-rndpl" (UniqueName: "kubernetes.io/secret/916b843f-01fd-420f-ab65-cbad6c11aefc-minikube-ingress-dns-token-rndpl") pod "kube-ingress-dns-minikube" (UID: "916b843f-01fd-420f-ab65-cbad6c11aefc")
	Jan 08 20:29:37 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:29:37.296544    1420 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 08 20:29:37 ingress-addon-legacy-056019 kubelet[1420]: E0108 20:29:37.298639    1420 reflector.go:178] object-"default"/"default-token-wvsk8": Failed to list *v1.Secret: secrets "default-token-wvsk8" is forbidden: User "system:node:ingress-addon-legacy-056019" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "ingress-addon-legacy-056019" and this object
	Jan 08 20:29:37 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:29:37.449704    1420 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-wvsk8" (UniqueName: "kubernetes.io/secret/f5b00b09-3d7c-4888-82a4-47b8c33733ca-default-token-wvsk8") pod "nginx" (UID: "f5b00b09-3d7c-4888-82a4-47b8c33733ca")
	Jan 08 20:29:38 ingress-addon-legacy-056019 kubelet[1420]: E0108 20:29:38.550681    1420 secret.go:195] Couldn't get secret default/default-token-wvsk8: failed to sync secret cache: timed out waiting for the condition
	Jan 08 20:29:38 ingress-addon-legacy-056019 kubelet[1420]: E0108 20:29:38.550876    1420 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/f5b00b09-3d7c-4888-82a4-47b8c33733ca-default-token-wvsk8 podName:f5b00b09-3d7c-4888-82a4-47b8c33733ca nodeName:}" failed. No retries permitted until 2024-01-08 20:29:39.050838826 +0000 UTC m=+87.524960299 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"default-token-wvsk8\" (UniqueName: \"kubernetes.io/secret/f5b00b09-3d7c-4888-82a4-47b8c33733ca-default-token-wvsk8\") pod \"nginx\" (UID: \"f5b00b09-3d7c-4888-82a4-47b8c33733ca\") : failed to sync secret cache: timed out waiting for the condition"
	Jan 08 20:32:03 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:32:03.005684    1420 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 08 20:32:03 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:32:03.135797    1420 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-wvsk8" (UniqueName: "kubernetes.io/secret/b8956853-fa97-4717-b1c4-2a8c38f925b8-default-token-wvsk8") pod "hello-world-app-5f5d8b66bb-pdzkx" (UID: "b8956853-fa97-4717-b1c4-2a8c38f925b8")
	Jan 08 20:32:04 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:32:04.384859    1420 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b082d44039fc7234776ac81488299bf3c32954997e0f29cc593e3662554a898f
	Jan 08 20:32:04 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:32:04.430723    1420 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b082d44039fc7234776ac81488299bf3c32954997e0f29cc593e3662554a898f
	Jan 08 20:32:04 ingress-addon-legacy-056019 kubelet[1420]: E0108 20:32:04.431487    1420 remote_runtime.go:295] ContainerStatus "b082d44039fc7234776ac81488299bf3c32954997e0f29cc593e3662554a898f" from runtime service failed: rpc error: code = NotFound desc = could not find container "b082d44039fc7234776ac81488299bf3c32954997e0f29cc593e3662554a898f": container with ID starting with b082d44039fc7234776ac81488299bf3c32954997e0f29cc593e3662554a898f not found: ID does not exist
	Jan 08 20:32:04 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:32:04.541347    1420 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-rndpl" (UniqueName: "kubernetes.io/secret/916b843f-01fd-420f-ab65-cbad6c11aefc-minikube-ingress-dns-token-rndpl") pod "916b843f-01fd-420f-ab65-cbad6c11aefc" (UID: "916b843f-01fd-420f-ab65-cbad6c11aefc")
	Jan 08 20:32:04 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:32:04.548160    1420 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916b843f-01fd-420f-ab65-cbad6c11aefc-minikube-ingress-dns-token-rndpl" (OuterVolumeSpecName: "minikube-ingress-dns-token-rndpl") pod "916b843f-01fd-420f-ab65-cbad6c11aefc" (UID: "916b843f-01fd-420f-ab65-cbad6c11aefc"). InnerVolumeSpecName "minikube-ingress-dns-token-rndpl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:32:04 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:32:04.641871    1420 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-rndpl" (UniqueName: "kubernetes.io/secret/916b843f-01fd-420f-ab65-cbad6c11aefc-minikube-ingress-dns-token-rndpl") on node "ingress-addon-legacy-056019" DevicePath ""
	Jan 08 20:32:13 ingress-addon-legacy-056019 kubelet[1420]: E0108 20:32:13.142854    1420 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-hqmw6.17a87964e0fe9816", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-hqmw6", UID:"b5270a11-edb9-41ac-a56e-f9eef62a8075", APIVersion:"v1", ResourceVersion:"457", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-056019"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f34d3483cd616, ext:241612326158, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f34d3483cd616, ext:241612326158, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-hqmw6.17a87964e0fe9816" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 20:32:13 ingress-addon-legacy-056019 kubelet[1420]: E0108 20:32:13.164061    1420 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-hqmw6.17a87964e0fe9816", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-hqmw6", UID:"b5270a11-edb9-41ac-a56e-f9eef62a8075", APIVersion:"v1", ResourceVersion:"457", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-056019"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15f34d3483cd616, ext:241612326158, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15f34d3493e60ed, ext:241629204450, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-hqmw6.17a87964e0fe9816" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 08 20:32:15 ingress-addon-legacy-056019 kubelet[1420]: W0108 20:32:15.437740    1420 pod_container_deletor.go:77] Container "720df2ef09b4bdf83a185d936be1251ca7cb830dc1066a3735c8b265e416af64" not found in pod's containers
	Jan 08 20:32:17 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:32:17.294485    1420 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/b5270a11-edb9-41ac-a56e-f9eef62a8075-webhook-cert") pod "b5270a11-edb9-41ac-a56e-f9eef62a8075" (UID: "b5270a11-edb9-41ac-a56e-f9eef62a8075")
	Jan 08 20:32:17 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:32:17.294566    1420 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-jv52p" (UniqueName: "kubernetes.io/secret/b5270a11-edb9-41ac-a56e-f9eef62a8075-ingress-nginx-token-jv52p") pod "b5270a11-edb9-41ac-a56e-f9eef62a8075" (UID: "b5270a11-edb9-41ac-a56e-f9eef62a8075")
	Jan 08 20:32:17 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:32:17.299859    1420 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5270a11-edb9-41ac-a56e-f9eef62a8075-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b5270a11-edb9-41ac-a56e-f9eef62a8075" (UID: "b5270a11-edb9-41ac-a56e-f9eef62a8075"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:32:17 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:32:17.300123    1420 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5270a11-edb9-41ac-a56e-f9eef62a8075-ingress-nginx-token-jv52p" (OuterVolumeSpecName: "ingress-nginx-token-jv52p") pod "b5270a11-edb9-41ac-a56e-f9eef62a8075" (UID: "b5270a11-edb9-41ac-a56e-f9eef62a8075"). InnerVolumeSpecName "ingress-nginx-token-jv52p". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 08 20:32:17 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:32:17.395044    1420 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/b5270a11-edb9-41ac-a56e-f9eef62a8075-webhook-cert") on node "ingress-addon-legacy-056019" DevicePath ""
	Jan 08 20:32:17 ingress-addon-legacy-056019 kubelet[1420]: I0108 20:32:17.395108    1420 reconciler.go:319] Volume detached for volume "ingress-nginx-token-jv52p" (UniqueName: "kubernetes.io/secret/b5270a11-edb9-41ac-a56e-f9eef62a8075-ingress-nginx-token-jv52p") on node "ingress-addon-legacy-056019" DevicePath ""
	Jan 08 20:32:18 ingress-addon-legacy-056019 kubelet[1420]: W0108 20:32:18.178226    1420 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/b5270a11-edb9-41ac-a56e-f9eef62a8075/volumes" does not exist
	
	
	==> storage-provisioner [1a377d5da6e4c70566eb0f998e4914398165007e0ffc84f3b15a668217d8599b] <==
	I0108 20:28:30.102200       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 20:28:30.114650       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 20:28:30.114806       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 20:28:30.121945       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 20:28:30.122354       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-056019_588da172-9389-4211-a9fb-2dac0d2d3b55!
	I0108 20:28:30.123218       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f588044d-646e-4791-931e-9de00c86479d", APIVersion:"v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-056019_588da172-9389-4211-a9fb-2dac0d2d3b55 became leader
	I0108 20:28:30.222723       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-056019_588da172-9389-4211-a9fb-2dac0d2d3b55!
	

                                                
                                                
-- /stdout --
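The storage-provisioner log above ends with a successful leader election against an Endpoints lock (kube-system/k8s.io-minikube-hostpath). As a generic inspection sketch (not part of the test), the current holder can be read from the lock object's leader annotation:

    kubectl --context ingress-addon-legacy-056019 -n kube-system \
      get endpoints k8s.io-minikube-hostpath -o yaml
    # The holder is recorded in the control-plane.alpha.kubernetes.io/leader
    # annotation (holderIdentity, leaseDurationSeconds, renewTime).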
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-056019 -n ingress-addon-legacy-056019
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-056019 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (175.01s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- exec busybox-5bc68d56bd-95tbd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
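The pipeline above extracts the host address that the next step tries to ping. A worked example with illustrative output, assuming busybox's classic nslookup layout (the NR==5 / field-3 selection is tied to that layout; the DNS server IP shown is only an example):

    $ nslookup host.minikube.internal
    Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

    Name:      host.minikube.internal
    Address 1: 192.168.39.1 host.minikube.internal

    # awk 'NR==5' keeps the fifth line, and cut -d' ' -f3 keeps its third
    # space-separated field: 192.168.39.1, the gateway of the libvirt network
    # that also serves as host.minikube.internal.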
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- exec busybox-5bc68d56bd-95tbd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-340815 -- exec busybox-5bc68d56bd-95tbd -- sh -c "ping -c 1 192.168.39.1": exit status 1 (211.252181ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-95tbd): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- exec busybox-5bc68d56bd-npzdk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- exec busybox-5bc68d56bd-npzdk -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-340815 -- exec busybox-5bc68d56bd-npzdk -- sh -c "ping -c 1 192.168.39.1": exit status 1 (189.574994ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:600: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-npzdk): exit status 1
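Both pods fail the ping with "permission denied (are you root?)", which on Linux means the busybox process has neither CAP_NET_RAW (needed for a raw ICMP socket) nor a GID covered by net.ipv4.ping_group_range (needed for unprivileged ICMP datagram sockets). A minimal diagnostic/workaround sketch, assuming the profile's kubeconfig context is named multinode-340815 and that patching the test's busybox deployment is acceptable (both are assumptions, not part of the test flow):

    # Check, from inside one of the busybox pods, whether unprivileged ICMP
    # sockets are allowed in its network namespace ("1 0" means disabled):
    kubectl --context multinode-340815 exec deployment/busybox -- \
      cat /proc/sys/net/ipv4/ping_group_range

    # One way to restore ping when the runtime drops NET_RAW from the default
    # capability set: grant it explicitly on the container (hypothetical patch
    # against the deployment created by multinode-pod-dns-test.yaml):
    kubectl --context multinode-340815 patch deployment busybox --type=json -p='[
      {"op": "add",
       "path": "/spec/template/spec/containers/0/securityContext",
       "value": {"capabilities": {"add": ["NET_RAW"]}}}
    ]'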
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-340815 -n multinode-340815
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-340815 logs -n 25: (1.41431117s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-354814 ssh -- ls                    | mount-start-2-354814 | jenkins | v1.32.0 | 08 Jan 24 20:36 UTC | 08 Jan 24 20:36 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-354814 ssh --                       | mount-start-2-354814 | jenkins | v1.32.0 | 08 Jan 24 20:36 UTC | 08 Jan 24 20:36 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-354814                           | mount-start-2-354814 | jenkins | v1.32.0 | 08 Jan 24 20:36 UTC | 08 Jan 24 20:36 UTC |
	| start   | -p mount-start-2-354814                           | mount-start-2-354814 | jenkins | v1.32.0 | 08 Jan 24 20:36 UTC | 08 Jan 24 20:37 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-354814 | jenkins | v1.32.0 | 08 Jan 24 20:37 UTC |                     |
	|         | --profile mount-start-2-354814                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-354814 ssh -- ls                    | mount-start-2-354814 | jenkins | v1.32.0 | 08 Jan 24 20:37 UTC | 08 Jan 24 20:37 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-354814 ssh --                       | mount-start-2-354814 | jenkins | v1.32.0 | 08 Jan 24 20:37 UTC | 08 Jan 24 20:37 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-354814                           | mount-start-2-354814 | jenkins | v1.32.0 | 08 Jan 24 20:37 UTC | 08 Jan 24 20:37 UTC |
	| delete  | -p mount-start-1-340632                           | mount-start-1-340632 | jenkins | v1.32.0 | 08 Jan 24 20:37 UTC | 08 Jan 24 20:37 UTC |
	| start   | -p multinode-340815                               | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:37 UTC | 08 Jan 24 20:40 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- apply -f                   | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC | 08 Jan 24 20:40 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- rollout                    | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC | 08 Jan 24 20:40 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- get pods -o                | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC | 08 Jan 24 20:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- get pods -o                | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC | 08 Jan 24 20:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- exec                       | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC | 08 Jan 24 20:40 UTC |
	|         | busybox-5bc68d56bd-95tbd --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- exec                       | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC | 08 Jan 24 20:40 UTC |
	|         | busybox-5bc68d56bd-npzdk --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- exec                       | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC | 08 Jan 24 20:40 UTC |
	|         | busybox-5bc68d56bd-95tbd --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- exec                       | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC | 08 Jan 24 20:40 UTC |
	|         | busybox-5bc68d56bd-npzdk --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- exec                       | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC | 08 Jan 24 20:40 UTC |
	|         | busybox-5bc68d56bd-95tbd -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- exec                       | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC | 08 Jan 24 20:40 UTC |
	|         | busybox-5bc68d56bd-npzdk -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- get pods -o                | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC | 08 Jan 24 20:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- exec                       | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC | 08 Jan 24 20:40 UTC |
	|         | busybox-5bc68d56bd-95tbd                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- exec                       | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC |                     |
	|         | busybox-5bc68d56bd-95tbd -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- exec                       | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC | 08 Jan 24 20:40 UTC |
	|         | busybox-5bc68d56bd-npzdk                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-340815 -- exec                       | multinode-340815     | jenkins | v1.32.0 | 08 Jan 24 20:40 UTC |                     |
	|         | busybox-5bc68d56bd-npzdk -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:37:20
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:37:20.738526   31613 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:37:20.738654   31613 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:37:20.738665   31613 out.go:309] Setting ErrFile to fd 2...
	I0108 20:37:20.738672   31613 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:37:20.738882   31613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 20:37:20.739447   31613 out.go:303] Setting JSON to false
	I0108 20:37:20.740323   31613 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4765,"bootTime":1704741476,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:37:20.740392   31613 start.go:138] virtualization: kvm guest
	I0108 20:37:20.743244   31613 out.go:177] * [multinode-340815] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:37:20.744862   31613 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:37:20.746474   31613 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:37:20.744822   31613 notify.go:220] Checking for updates...
	I0108 20:37:20.748270   31613 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:37:20.750029   31613 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:37:20.751740   31613 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:37:20.753464   31613 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:37:20.755441   31613 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:37:20.791634   31613 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 20:37:20.793137   31613 start.go:298] selected driver: kvm2
	I0108 20:37:20.793155   31613 start.go:902] validating driver "kvm2" against <nil>
	I0108 20:37:20.793167   31613 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:37:20.793937   31613 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:37:20.794006   31613 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 20:37:20.808911   31613 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 20:37:20.808977   31613 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:37:20.809177   31613 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:37:20.809226   31613 cni.go:84] Creating CNI manager for ""
	I0108 20:37:20.809238   31613 cni.go:136] 0 nodes found, recommending kindnet
	I0108 20:37:20.809250   31613 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 20:37:20.809259   31613 start_flags.go:323] config:
	{Name:multinode-340815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:37:20.809369   31613 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:37:20.811757   31613 out.go:177] * Starting control plane node multinode-340815 in cluster multinode-340815
	I0108 20:37:20.813268   31613 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:37:20.813309   31613 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 20:37:20.813320   31613 cache.go:56] Caching tarball of preloaded images
	I0108 20:37:20.813413   31613 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 20:37:20.813424   31613 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 20:37:20.813734   31613 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/config.json ...
	I0108 20:37:20.813754   31613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/config.json: {Name:mk3573eab33d97691293a0ea9ea9297e56f4c071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:37:20.813900   31613 start.go:365] acquiring machines lock for multinode-340815: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 20:37:20.813932   31613 start.go:369] acquired machines lock for "multinode-340815" in 18.962µs
	I0108 20:37:20.813948   31613 start.go:93] Provisioning new machine with config: &{Name:multinode-340815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:37:20.813996   31613 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 20:37:20.815917   31613 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0108 20:37:20.816034   31613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:37:20.816067   31613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:37:20.830280   31613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36215
	I0108 20:37:20.830709   31613 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:37:20.831254   31613 main.go:141] libmachine: Using API Version  1
	I0108 20:37:20.831273   31613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:37:20.831667   31613 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:37:20.831865   31613 main.go:141] libmachine: (multinode-340815) Calling .GetMachineName
	I0108 20:37:20.832039   31613 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:37:20.832235   31613 start.go:159] libmachine.API.Create for "multinode-340815" (driver="kvm2")
	I0108 20:37:20.832265   31613 client.go:168] LocalClient.Create starting
	I0108 20:37:20.832301   31613 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem
	I0108 20:37:20.832344   31613 main.go:141] libmachine: Decoding PEM data...
	I0108 20:37:20.832361   31613 main.go:141] libmachine: Parsing certificate...
	I0108 20:37:20.832411   31613 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem
	I0108 20:37:20.832431   31613 main.go:141] libmachine: Decoding PEM data...
	I0108 20:37:20.832441   31613 main.go:141] libmachine: Parsing certificate...
	I0108 20:37:20.832454   31613 main.go:141] libmachine: Running pre-create checks...
	I0108 20:37:20.832462   31613 main.go:141] libmachine: (multinode-340815) Calling .PreCreateCheck
	I0108 20:37:20.832800   31613 main.go:141] libmachine: (multinode-340815) Calling .GetConfigRaw
	I0108 20:37:20.833158   31613 main.go:141] libmachine: Creating machine...
	I0108 20:37:20.833172   31613 main.go:141] libmachine: (multinode-340815) Calling .Create
	I0108 20:37:20.833307   31613 main.go:141] libmachine: (multinode-340815) Creating KVM machine...
	I0108 20:37:20.834509   31613 main.go:141] libmachine: (multinode-340815) DBG | found existing default KVM network
	I0108 20:37:20.835212   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:20.835074   31636 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a40}
	I0108 20:37:20.840796   31613 main.go:141] libmachine: (multinode-340815) DBG | trying to create private KVM network mk-multinode-340815 192.168.39.0/24...
	I0108 20:37:20.912078   31613 main.go:141] libmachine: (multinode-340815) DBG | private KVM network mk-multinode-340815 192.168.39.0/24 created
	I0108 20:37:20.912184   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:20.912045   31636 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:37:20.912222   31613 main.go:141] libmachine: (multinode-340815) Setting up store path in /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815 ...
	I0108 20:37:20.912242   31613 main.go:141] libmachine: (multinode-340815) Building disk image from file:///home/jenkins/minikube-integration/17907-10702/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 20:37:20.912262   31613 main.go:141] libmachine: (multinode-340815) Downloading /home/jenkins/minikube-integration/17907-10702/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17907-10702/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 20:37:21.118082   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:21.117869   31636 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa...
	I0108 20:37:21.303099   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:21.302923   31636 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/multinode-340815.rawdisk...
	I0108 20:37:21.303139   31613 main.go:141] libmachine: (multinode-340815) DBG | Writing magic tar header
	I0108 20:37:21.303179   31613 main.go:141] libmachine: (multinode-340815) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815 (perms=drwx------)
	I0108 20:37:21.303212   31613 main.go:141] libmachine: (multinode-340815) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube/machines (perms=drwxr-xr-x)
	I0108 20:37:21.303241   31613 main.go:141] libmachine: (multinode-340815) DBG | Writing SSH key tar header
	I0108 20:37:21.303258   31613 main.go:141] libmachine: (multinode-340815) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube (perms=drwxr-xr-x)
	I0108 20:37:21.303268   31613 main.go:141] libmachine: (multinode-340815) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702 (perms=drwxrwxr-x)
	I0108 20:37:21.303278   31613 main.go:141] libmachine: (multinode-340815) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 20:37:21.303287   31613 main.go:141] libmachine: (multinode-340815) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 20:37:21.303298   31613 main.go:141] libmachine: (multinode-340815) Creating domain...
	I0108 20:37:21.303321   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:21.303048   31636 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815 ...
	I0108 20:37:21.303343   31613 main.go:141] libmachine: (multinode-340815) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815
	I0108 20:37:21.303357   31613 main.go:141] libmachine: (multinode-340815) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube/machines
	I0108 20:37:21.303364   31613 main.go:141] libmachine: (multinode-340815) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:37:21.303372   31613 main.go:141] libmachine: (multinode-340815) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702
	I0108 20:37:21.303378   31613 main.go:141] libmachine: (multinode-340815) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 20:37:21.303385   31613 main.go:141] libmachine: (multinode-340815) DBG | Checking permissions on dir: /home/jenkins
	I0108 20:37:21.303395   31613 main.go:141] libmachine: (multinode-340815) DBG | Checking permissions on dir: /home
	I0108 20:37:21.303406   31613 main.go:141] libmachine: (multinode-340815) DBG | Skipping /home - not owner
	I0108 20:37:21.304455   31613 main.go:141] libmachine: (multinode-340815) define libvirt domain using xml: 
	I0108 20:37:21.304488   31613 main.go:141] libmachine: (multinode-340815) <domain type='kvm'>
	I0108 20:37:21.304501   31613 main.go:141] libmachine: (multinode-340815)   <name>multinode-340815</name>
	I0108 20:37:21.304515   31613 main.go:141] libmachine: (multinode-340815)   <memory unit='MiB'>2200</memory>
	I0108 20:37:21.304531   31613 main.go:141] libmachine: (multinode-340815)   <vcpu>2</vcpu>
	I0108 20:37:21.304543   31613 main.go:141] libmachine: (multinode-340815)   <features>
	I0108 20:37:21.304557   31613 main.go:141] libmachine: (multinode-340815)     <acpi/>
	I0108 20:37:21.304574   31613 main.go:141] libmachine: (multinode-340815)     <apic/>
	I0108 20:37:21.304588   31613 main.go:141] libmachine: (multinode-340815)     <pae/>
	I0108 20:37:21.304600   31613 main.go:141] libmachine: (multinode-340815)     
	I0108 20:37:21.304619   31613 main.go:141] libmachine: (multinode-340815)   </features>
	I0108 20:37:21.304636   31613 main.go:141] libmachine: (multinode-340815)   <cpu mode='host-passthrough'>
	I0108 20:37:21.304648   31613 main.go:141] libmachine: (multinode-340815)   
	I0108 20:37:21.304662   31613 main.go:141] libmachine: (multinode-340815)   </cpu>
	I0108 20:37:21.304683   31613 main.go:141] libmachine: (multinode-340815)   <os>
	I0108 20:37:21.304704   31613 main.go:141] libmachine: (multinode-340815)     <type>hvm</type>
	I0108 20:37:21.304721   31613 main.go:141] libmachine: (multinode-340815)     <boot dev='cdrom'/>
	I0108 20:37:21.304735   31613 main.go:141] libmachine: (multinode-340815)     <boot dev='hd'/>
	I0108 20:37:21.304747   31613 main.go:141] libmachine: (multinode-340815)     <bootmenu enable='no'/>
	I0108 20:37:21.304761   31613 main.go:141] libmachine: (multinode-340815)   </os>
	I0108 20:37:21.304772   31613 main.go:141] libmachine: (multinode-340815)   <devices>
	I0108 20:37:21.304787   31613 main.go:141] libmachine: (multinode-340815)     <disk type='file' device='cdrom'>
	I0108 20:37:21.304807   31613 main.go:141] libmachine: (multinode-340815)       <source file='/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/boot2docker.iso'/>
	I0108 20:37:21.304822   31613 main.go:141] libmachine: (multinode-340815)       <target dev='hdc' bus='scsi'/>
	I0108 20:37:21.304832   31613 main.go:141] libmachine: (multinode-340815)       <readonly/>
	I0108 20:37:21.304841   31613 main.go:141] libmachine: (multinode-340815)     </disk>
	I0108 20:37:21.304854   31613 main.go:141] libmachine: (multinode-340815)     <disk type='file' device='disk'>
	I0108 20:37:21.304870   31613 main.go:141] libmachine: (multinode-340815)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 20:37:21.304887   31613 main.go:141] libmachine: (multinode-340815)       <source file='/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/multinode-340815.rawdisk'/>
	I0108 20:37:21.304897   31613 main.go:141] libmachine: (multinode-340815)       <target dev='hda' bus='virtio'/>
	I0108 20:37:21.304939   31613 main.go:141] libmachine: (multinode-340815)     </disk>
	I0108 20:37:21.304961   31613 main.go:141] libmachine: (multinode-340815)     <interface type='network'>
	I0108 20:37:21.304972   31613 main.go:141] libmachine: (multinode-340815)       <source network='mk-multinode-340815'/>
	I0108 20:37:21.304980   31613 main.go:141] libmachine: (multinode-340815)       <model type='virtio'/>
	I0108 20:37:21.304986   31613 main.go:141] libmachine: (multinode-340815)     </interface>
	I0108 20:37:21.304995   31613 main.go:141] libmachine: (multinode-340815)     <interface type='network'>
	I0108 20:37:21.305001   31613 main.go:141] libmachine: (multinode-340815)       <source network='default'/>
	I0108 20:37:21.305009   31613 main.go:141] libmachine: (multinode-340815)       <model type='virtio'/>
	I0108 20:37:21.305026   31613 main.go:141] libmachine: (multinode-340815)     </interface>
	I0108 20:37:21.305037   31613 main.go:141] libmachine: (multinode-340815)     <serial type='pty'>
	I0108 20:37:21.305045   31613 main.go:141] libmachine: (multinode-340815)       <target port='0'/>
	I0108 20:37:21.305052   31613 main.go:141] libmachine: (multinode-340815)     </serial>
	I0108 20:37:21.305059   31613 main.go:141] libmachine: (multinode-340815)     <console type='pty'>
	I0108 20:37:21.305067   31613 main.go:141] libmachine: (multinode-340815)       <target type='serial' port='0'/>
	I0108 20:37:21.305072   31613 main.go:141] libmachine: (multinode-340815)     </console>
	I0108 20:37:21.305080   31613 main.go:141] libmachine: (multinode-340815)     <rng model='virtio'>
	I0108 20:37:21.305088   31613 main.go:141] libmachine: (multinode-340815)       <backend model='random'>/dev/random</backend>
	I0108 20:37:21.305095   31613 main.go:141] libmachine: (multinode-340815)     </rng>
	I0108 20:37:21.305113   31613 main.go:141] libmachine: (multinode-340815)     
	I0108 20:37:21.305141   31613 main.go:141] libmachine: (multinode-340815)     
	I0108 20:37:21.305156   31613 main.go:141] libmachine: (multinode-340815)   </devices>
	I0108 20:37:21.305170   31613 main.go:141] libmachine: (multinode-340815) </domain>
	I0108 20:37:21.305187   31613 main.go:141] libmachine: (multinode-340815) 
	I0108 20:37:21.309787   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:92:c4:c9 in network default
	I0108 20:37:21.310338   31613 main.go:141] libmachine: (multinode-340815) Ensuring networks are active...
	I0108 20:37:21.310361   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:21.311153   31613 main.go:141] libmachine: (multinode-340815) Ensuring network default is active
	I0108 20:37:21.311528   31613 main.go:141] libmachine: (multinode-340815) Ensuring network mk-multinode-340815 is active
	I0108 20:37:21.312147   31613 main.go:141] libmachine: (multinode-340815) Getting domain xml...
	I0108 20:37:21.312946   31613 main.go:141] libmachine: (multinode-340815) Creating domain...
	I0108 20:37:22.561472   31613 main.go:141] libmachine: (multinode-340815) Waiting to get IP...
	I0108 20:37:22.562440   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:22.563068   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:22.563113   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:22.563044   31636 retry.go:31] will retry after 191.790609ms: waiting for machine to come up
	I0108 20:37:22.756445   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:22.756908   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:22.756938   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:22.756831   31636 retry.go:31] will retry after 315.24821ms: waiting for machine to come up
	I0108 20:37:23.073333   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:23.073749   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:23.073780   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:23.073703   31636 retry.go:31] will retry after 358.748977ms: waiting for machine to come up
	I0108 20:37:23.434215   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:23.434765   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:23.434794   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:23.434725   31636 retry.go:31] will retry after 517.105693ms: waiting for machine to come up
	I0108 20:37:23.953217   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:23.953666   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:23.953710   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:23.953617   31636 retry.go:31] will retry after 754.433206ms: waiting for machine to come up
	I0108 20:37:24.709626   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:24.709954   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:24.709980   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:24.709920   31636 retry.go:31] will retry after 863.056713ms: waiting for machine to come up
	I0108 20:37:25.575134   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:25.575511   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:25.575551   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:25.575468   31636 retry.go:31] will retry after 763.369048ms: waiting for machine to come up
	I0108 20:37:26.339987   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:26.340455   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:26.340486   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:26.340399   31636 retry.go:31] will retry after 1.006437965s: waiting for machine to come up
	I0108 20:37:27.348204   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:27.348644   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:27.348668   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:27.348583   31636 retry.go:31] will retry after 1.564055066s: waiting for machine to come up
	I0108 20:37:28.915484   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:28.916051   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:28.916087   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:28.915978   31636 retry.go:31] will retry after 1.667589456s: waiting for machine to come up
	I0108 20:37:30.585305   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:30.585739   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:30.585765   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:30.585689   31636 retry.go:31] will retry after 1.92568373s: waiting for machine to come up
	I0108 20:37:32.514004   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:32.514427   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:32.514455   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:32.514374   31636 retry.go:31] will retry after 2.54346656s: waiting for machine to come up
	I0108 20:37:35.060957   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:35.061430   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:35.061455   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:35.061393   31636 retry.go:31] will retry after 4.257623651s: waiting for machine to come up
	I0108 20:37:39.323263   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:39.323693   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:37:39.323720   31613 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:37:39.323653   31636 retry.go:31] will retry after 4.664922428s: waiting for machine to come up
	I0108 20:37:43.992289   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:43.992639   31613 main.go:141] libmachine: (multinode-340815) Found IP for machine: 192.168.39.196
	I0108 20:37:43.992675   31613 main.go:141] libmachine: (multinode-340815) Reserving static IP address...
	I0108 20:37:43.992692   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has current primary IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:43.993071   31613 main.go:141] libmachine: (multinode-340815) DBG | unable to find host DHCP lease matching {name: "multinode-340815", mac: "52:54:00:06:a0:1e", ip: "192.168.39.196"} in network mk-multinode-340815
	I0108 20:37:44.066310   31613 main.go:141] libmachine: (multinode-340815) DBG | Getting to WaitForSSH function...
	I0108 20:37:44.066341   31613 main.go:141] libmachine: (multinode-340815) Reserved static IP address: 192.168.39.196
	I0108 20:37:44.066354   31613 main.go:141] libmachine: (multinode-340815) Waiting for SSH to be available...
	I0108 20:37:44.069099   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.069582   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:minikube Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:44.069621   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.069751   31613 main.go:141] libmachine: (multinode-340815) DBG | Using SSH client type: external
	I0108 20:37:44.069782   31613 main.go:141] libmachine: (multinode-340815) DBG | Using SSH private key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa (-rw-------)
	I0108 20:37:44.069824   31613 main.go:141] libmachine: (multinode-340815) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 20:37:44.069843   31613 main.go:141] libmachine: (multinode-340815) DBG | About to run SSH command:
	I0108 20:37:44.069857   31613 main.go:141] libmachine: (multinode-340815) DBG | exit 0
	I0108 20:37:44.160258   31613 main.go:141] libmachine: (multinode-340815) DBG | SSH cmd err, output: <nil>: 
	I0108 20:37:44.160542   31613 main.go:141] libmachine: (multinode-340815) KVM machine creation complete!
	I0108 20:37:44.160888   31613 main.go:141] libmachine: (multinode-340815) Calling .GetConfigRaw
	I0108 20:37:44.161458   31613 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:37:44.161672   31613 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:37:44.161838   31613 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 20:37:44.161854   31613 main.go:141] libmachine: (multinode-340815) Calling .GetState
	I0108 20:37:44.163229   31613 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 20:37:44.163246   31613 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 20:37:44.163253   31613 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 20:37:44.163263   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:37:44.165465   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.165817   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:44.165854   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.165981   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:37:44.166178   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:44.166339   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:44.166474   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:37:44.166644   31613 main.go:141] libmachine: Using SSH client type: native
	I0108 20:37:44.166998   31613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0108 20:37:44.167014   31613 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 20:37:44.283338   31613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:37:44.283376   31613 main.go:141] libmachine: Detecting the provisioner...
	I0108 20:37:44.283388   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:37:44.286164   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.286452   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:44.286484   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.286600   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:37:44.286830   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:44.287006   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:44.287135   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:37:44.287301   31613 main.go:141] libmachine: Using SSH client type: native
	I0108 20:37:44.287609   31613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0108 20:37:44.287620   31613 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 20:37:44.404942   31613 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 20:37:44.405027   31613 main.go:141] libmachine: found compatible host: buildroot
	I0108 20:37:44.405040   31613 main.go:141] libmachine: Provisioning with buildroot...
	I0108 20:37:44.405049   31613 main.go:141] libmachine: (multinode-340815) Calling .GetMachineName
	I0108 20:37:44.405276   31613 buildroot.go:166] provisioning hostname "multinode-340815"
	I0108 20:37:44.405299   31613 main.go:141] libmachine: (multinode-340815) Calling .GetMachineName
	I0108 20:37:44.405458   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:37:44.407798   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.408146   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:44.408175   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.408314   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:37:44.408508   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:44.408683   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:44.408841   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:37:44.409027   31613 main.go:141] libmachine: Using SSH client type: native
	I0108 20:37:44.409378   31613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0108 20:37:44.409393   31613 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-340815 && echo "multinode-340815" | sudo tee /etc/hostname
	I0108 20:37:44.536880   31613 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-340815
	
	I0108 20:37:44.536914   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:37:44.539161   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.539557   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:44.539589   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.539765   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:37:44.539938   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:44.540112   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:44.540257   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:37:44.540415   31613 main.go:141] libmachine: Using SSH client type: native
	I0108 20:37:44.540726   31613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0108 20:37:44.540743   31613 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-340815' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-340815/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-340815' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:37:44.664795   31613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:37:44.664830   31613 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17907-10702/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-10702/.minikube}
	I0108 20:37:44.664887   31613 buildroot.go:174] setting up certificates
	I0108 20:37:44.664897   31613 provision.go:83] configureAuth start
	I0108 20:37:44.664912   31613 main.go:141] libmachine: (multinode-340815) Calling .GetMachineName
	I0108 20:37:44.665188   31613 main.go:141] libmachine: (multinode-340815) Calling .GetIP
	I0108 20:37:44.667719   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.668107   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:44.668135   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.668314   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:37:44.670608   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.670970   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:44.671001   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.671171   31613 provision.go:138] copyHostCerts
	I0108 20:37:44.671223   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 20:37:44.671279   31613 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem, removing ...
	I0108 20:37:44.671290   31613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 20:37:44.671353   31613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem (1123 bytes)
	I0108 20:37:44.671465   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 20:37:44.671493   31613 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem, removing ...
	I0108 20:37:44.671504   31613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 20:37:44.671541   31613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem (1675 bytes)
	I0108 20:37:44.671618   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 20:37:44.671641   31613 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem, removing ...
	I0108 20:37:44.671650   31613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 20:37:44.671675   31613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem (1082 bytes)
	I0108 20:37:44.671809   31613 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem org=jenkins.multinode-340815 san=[192.168.39.196 192.168.39.196 localhost 127.0.0.1 minikube multinode-340815]
	I0108 20:37:44.755130   31613 provision.go:172] copyRemoteCerts
	I0108 20:37:44.755188   31613 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:37:44.755209   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:37:44.757661   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.758031   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:44.758061   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.758224   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:37:44.758419   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:44.758561   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:37:44.758735   31613 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:37:44.845326   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 20:37:44.845400   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:37:44.868233   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 20:37:44.868319   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 20:37:44.890429   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 20:37:44.890491   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:37:44.912689   31613 provision.go:86] duration metric: configureAuth took 247.776533ms
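	(For reference, a minimal check of the TLS material the configureAuth step above lands on the guest; hypothetical command, not part of the test run, paths taken from the scp lines above:
		ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
	)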
	I0108 20:37:44.912715   31613 buildroot.go:189] setting minikube options for container-runtime
	I0108 20:37:44.912890   31613 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:37:44.912958   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:37:44.915256   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.915550   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:44.915596   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:44.915722   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:37:44.915931   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:44.916106   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:44.916212   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:37:44.916378   31613 main.go:141] libmachine: Using SSH client type: native
	I0108 20:37:44.916804   31613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0108 20:37:44.916823   31613 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:37:45.225660   31613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:37:45.225725   31613 main.go:141] libmachine: Checking connection to Docker...
	I0108 20:37:45.225742   31613 main.go:141] libmachine: (multinode-340815) Calling .GetURL
	I0108 20:37:45.227065   31613 main.go:141] libmachine: (multinode-340815) DBG | Using libvirt version 6000000
	I0108 20:37:45.229312   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.229637   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:45.229671   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.229830   31613 main.go:141] libmachine: Docker is up and running!
	I0108 20:37:45.229845   31613 main.go:141] libmachine: Reticulating splines...
	I0108 20:37:45.229853   31613 client.go:171] LocalClient.Create took 24.397581051s
	I0108 20:37:45.229879   31613 start.go:167] duration metric: libmachine.API.Create for "multinode-340815" took 24.397642905s
	I0108 20:37:45.229891   31613 start.go:300] post-start starting for "multinode-340815" (driver="kvm2")
	I0108 20:37:45.229906   31613 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:37:45.229930   31613 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:37:45.230150   31613 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:37:45.230171   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:37:45.232227   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.232498   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:45.232525   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.232664   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:37:45.232872   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:45.233033   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:37:45.233175   31613 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:37:45.323032   31613 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:37:45.327848   31613 command_runner.go:130] > NAME=Buildroot
	I0108 20:37:45.327879   31613 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 20:37:45.327886   31613 command_runner.go:130] > ID=buildroot
	I0108 20:37:45.327895   31613 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 20:37:45.327908   31613 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 20:37:45.327980   31613 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 20:37:45.327997   31613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/addons for local assets ...
	I0108 20:37:45.328058   31613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/files for local assets ...
	I0108 20:37:45.328166   31613 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> 178962.pem in /etc/ssl/certs
	I0108 20:37:45.328177   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> /etc/ssl/certs/178962.pem
	I0108 20:37:45.328266   31613 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:37:45.338348   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /etc/ssl/certs/178962.pem (1708 bytes)
	I0108 20:37:45.361390   31613 start.go:303] post-start completed in 131.481317ms
	I0108 20:37:45.361446   31613 main.go:141] libmachine: (multinode-340815) Calling .GetConfigRaw
	I0108 20:37:45.362075   31613 main.go:141] libmachine: (multinode-340815) Calling .GetIP
	I0108 20:37:45.364577   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.364936   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:45.364974   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.365182   31613 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/config.json ...
	I0108 20:37:45.365341   31613 start.go:128] duration metric: createHost completed in 24.551335847s
	I0108 20:37:45.365366   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:37:45.367269   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.367573   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:45.367601   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.367704   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:37:45.367874   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:45.368033   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:45.368140   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:37:45.368278   31613 main.go:141] libmachine: Using SSH client type: native
	I0108 20:37:45.368592   31613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0108 20:37:45.368604   31613 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 20:37:45.484883   31613 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704746265.455455334
	
	I0108 20:37:45.484909   31613 fix.go:206] guest clock: 1704746265.455455334
	I0108 20:37:45.484920   31613 fix.go:219] Guest: 2024-01-08 20:37:45.455455334 +0000 UTC Remote: 2024-01-08 20:37:45.365351543 +0000 UTC m=+24.677807294 (delta=90.103791ms)
	I0108 20:37:45.484949   31613 fix.go:190] guest clock delta is within tolerance: 90.103791ms
	I0108 20:37:45.484960   31613 start.go:83] releasing machines lock for "multinode-340815", held for 24.67101834s
	I0108 20:37:45.484983   31613 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:37:45.485263   31613 main.go:141] libmachine: (multinode-340815) Calling .GetIP
	I0108 20:37:45.488003   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.488380   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:45.488408   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.488542   31613 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:37:45.489029   31613 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:37:45.489210   31613 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:37:45.489310   31613 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:37:45.489348   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:37:45.489420   31613 ssh_runner.go:195] Run: cat /version.json
	I0108 20:37:45.489442   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:37:45.491825   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.491913   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.492196   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:45.492224   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.492295   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:45.492328   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:37:45.492330   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:45.492489   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:45.492491   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:37:45.492675   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:37:45.492684   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:37:45.492842   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:37:45.492857   31613 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:37:45.492971   31613 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:37:45.577095   31613 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I0108 20:37:45.577668   31613 ssh_runner.go:195] Run: systemctl --version
	I0108 20:37:45.599759   31613 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 20:37:45.599827   31613 command_runner.go:130] > systemd 247 (247)
	I0108 20:37:45.599844   31613 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0108 20:37:45.599907   31613 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:37:45.759061   31613 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:37:45.764774   31613 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 20:37:45.764821   31613 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 20:37:45.764874   31613 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:37:45.781296   31613 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 20:37:45.781385   31613 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 20:37:45.781395   31613 start.go:475] detecting cgroup driver to use...
	I0108 20:37:45.781454   31613 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:37:45.796006   31613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:37:45.809533   31613 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:37:45.809603   31613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:37:45.822938   31613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:37:45.836734   31613 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:37:45.947016   31613 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0108 20:37:45.947109   31613 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:37:45.959809   31613 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 20:37:46.057561   31613 docker.go:233] disabling docker service ...
	I0108 20:37:46.057632   31613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:37:46.070422   31613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:37:46.081559   31613 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0108 20:37:46.081994   31613 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:37:46.184574   31613 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 20:37:46.184654   31613 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:37:46.284595   31613 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0108 20:37:46.284631   31613 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 20:37:46.284703   31613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:37:46.296439   31613 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:37:46.313850   31613 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
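	(With /etc/crictl.yaml pointing runtime-endpoint at the CRI-O socket, crictl no longer needs an explicit --runtime-endpoint flag; an illustrative check, matching the invocation the test itself uses further down:
		sudo crictl version
	)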
	I0108 20:37:46.313892   31613 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 20:37:46.313944   31613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:37:46.323157   31613 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:37:46.323220   31613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:37:46.332374   31613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:37:46.341508   31613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
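	(The sed edits above pin the pause image, cgroup manager and conmon cgroup in the CRI-O drop-in; a minimal way to confirm the result, hypothetical check with the values taken from the log lines above:
		grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
		# expected:
		#   pause_image = "registry.k8s.io/pause:3.9"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
	)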
	I0108 20:37:46.351266   31613 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:37:46.361258   31613 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:37:46.369863   31613 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 20:37:46.370038   31613 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 20:37:46.370112   31613 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 20:37:46.383305   31613 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
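	(The modprobe/echo pair above establishes the two kernel prerequisites for pod networking on this guest; a hedged verification sketch, assuming the default Buildroot tooling:
		lsmod | grep br_netfilter                  # module loaded by the step above
		cat /proc/sys/net/ipv4/ip_forward          # expected: 1
		sysctl net.bridge.bridge-nf-call-iptables  # resolves once br_netfilter is loaded
	)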
	I0108 20:37:46.392077   31613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:37:46.499142   31613 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 20:37:46.667712   31613 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:37:46.667793   31613 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:37:46.676306   31613 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 20:37:46.676329   31613 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 20:37:46.676336   31613 command_runner.go:130] > Device: 16h/22d	Inode: 767         Links: 1
	I0108 20:37:46.676343   31613 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:37:46.676347   31613 command_runner.go:130] > Access: 2024-01-08 20:37:46.626292822 +0000
	I0108 20:37:46.676353   31613 command_runner.go:130] > Modify: 2024-01-08 20:37:46.626292822 +0000
	I0108 20:37:46.676357   31613 command_runner.go:130] > Change: 2024-01-08 20:37:46.626292822 +0000
	I0108 20:37:46.676361   31613 command_runner.go:130] >  Birth: -
	I0108 20:37:46.676466   31613 start.go:543] Will wait 60s for crictl version
	I0108 20:37:46.676537   31613 ssh_runner.go:195] Run: which crictl
	I0108 20:37:46.680702   31613 command_runner.go:130] > /usr/bin/crictl
	I0108 20:37:46.680783   31613 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:37:46.720122   31613 command_runner.go:130] > Version:  0.1.0
	I0108 20:37:46.720144   31613 command_runner.go:130] > RuntimeName:  cri-o
	I0108 20:37:46.720149   31613 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 20:37:46.720154   31613 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 20:37:46.720227   31613 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 20:37:46.720325   31613 ssh_runner.go:195] Run: crio --version
	I0108 20:37:46.765140   31613 command_runner.go:130] > crio version 1.24.1
	I0108 20:37:46.765163   31613 command_runner.go:130] > Version:          1.24.1
	I0108 20:37:46.765169   31613 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 20:37:46.765173   31613 command_runner.go:130] > GitTreeState:     dirty
	I0108 20:37:46.765179   31613 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 20:37:46.765184   31613 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 20:37:46.765188   31613 command_runner.go:130] > Compiler:         gc
	I0108 20:37:46.765192   31613 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:37:46.765202   31613 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:37:46.765209   31613 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:37:46.765213   31613 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:37:46.765217   31613 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:37:46.766520   31613 ssh_runner.go:195] Run: crio --version
	I0108 20:37:46.814929   31613 command_runner.go:130] > crio version 1.24.1
	I0108 20:37:46.814962   31613 command_runner.go:130] > Version:          1.24.1
	I0108 20:37:46.814973   31613 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 20:37:46.814979   31613 command_runner.go:130] > GitTreeState:     dirty
	I0108 20:37:46.814984   31613 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 20:37:46.814989   31613 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 20:37:46.814993   31613 command_runner.go:130] > Compiler:         gc
	I0108 20:37:46.814997   31613 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:37:46.815008   31613 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:37:46.815015   31613 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:37:46.815024   31613 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:37:46.815031   31613 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:37:46.817016   31613 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 20:37:46.818471   31613 main.go:141] libmachine: (multinode-340815) Calling .GetIP
	I0108 20:37:46.821059   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:46.821355   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:37:46.821382   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:37:46.821583   31613 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 20:37:46.825809   31613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:37:46.839167   31613 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:37:46.839232   31613 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:37:46.875852   31613 command_runner.go:130] > {
	I0108 20:37:46.875880   31613 command_runner.go:130] >   "images": [
	I0108 20:37:46.875887   31613 command_runner.go:130] >   ]
	I0108 20:37:46.875892   31613 command_runner.go:130] > }
	I0108 20:37:46.875998   31613 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 20:37:46.876054   31613 ssh_runner.go:195] Run: which lz4
	I0108 20:37:46.880411   31613 command_runner.go:130] > /usr/bin/lz4
	I0108 20:37:46.880454   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 20:37:46.880539   31613 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 20:37:46.885084   31613 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 20:37:46.885121   31613 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 20:37:46.885142   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 20:37:48.686065   31613 crio.go:444] Took 1.805557 seconds to copy over tarball
	I0108 20:37:48.686159   31613 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 20:37:51.788359   31613 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.102164498s)
	I0108 20:37:51.788393   31613 crio.go:451] Took 3.102292 seconds to extract the tarball
	I0108 20:37:51.788405   31613 ssh_runner.go:146] rm: /preloaded.tar.lz4
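	(Condensed, the preload path the log follows is: check the image store, copy the tarball to the guest, unpack it over the container storage root, then remove it; a hedged sketch of the equivalent manual steps, with paths taken from the log and not a supported workflow:
		sudo crictl images --output json                 # empty "images" list -> nothing preloaded yet
		# copy preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4 on the guest
		sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # unpacks image layers under /var/lib/containers/storage
		sudo rm /preloaded.tar.lz4
	)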
	I0108 20:37:51.829305   31613 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:37:51.899200   31613 command_runner.go:130] > {
	I0108 20:37:51.899222   31613 command_runner.go:130] >   "images": [
	I0108 20:37:51.899227   31613 command_runner.go:130] >     {
	I0108 20:37:51.899240   31613 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0108 20:37:51.899246   31613 command_runner.go:130] >       "repoTags": [
	I0108 20:37:51.899254   31613 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 20:37:51.899258   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899264   31613 command_runner.go:130] >       "repoDigests": [
	I0108 20:37:51.899272   31613 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 20:37:51.899279   31613 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0108 20:37:51.899282   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899287   31613 command_runner.go:130] >       "size": "65258016",
	I0108 20:37:51.899294   31613 command_runner.go:130] >       "uid": null,
	I0108 20:37:51.899298   31613 command_runner.go:130] >       "username": "",
	I0108 20:37:51.899311   31613 command_runner.go:130] >       "spec": null,
	I0108 20:37:51.899318   31613 command_runner.go:130] >       "pinned": false
	I0108 20:37:51.899322   31613 command_runner.go:130] >     },
	I0108 20:37:51.899328   31613 command_runner.go:130] >     {
	I0108 20:37:51.899334   31613 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0108 20:37:51.899340   31613 command_runner.go:130] >       "repoTags": [
	I0108 20:37:51.899346   31613 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 20:37:51.899352   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899356   31613 command_runner.go:130] >       "repoDigests": [
	I0108 20:37:51.899364   31613 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0108 20:37:51.899373   31613 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0108 20:37:51.899379   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899387   31613 command_runner.go:130] >       "size": "31470524",
	I0108 20:37:51.899394   31613 command_runner.go:130] >       "uid": null,
	I0108 20:37:51.899398   31613 command_runner.go:130] >       "username": "",
	I0108 20:37:51.899401   31613 command_runner.go:130] >       "spec": null,
	I0108 20:37:51.899405   31613 command_runner.go:130] >       "pinned": false
	I0108 20:37:51.899409   31613 command_runner.go:130] >     },
	I0108 20:37:51.899413   31613 command_runner.go:130] >     {
	I0108 20:37:51.899424   31613 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0108 20:37:51.899431   31613 command_runner.go:130] >       "repoTags": [
	I0108 20:37:51.899436   31613 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 20:37:51.899442   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899446   31613 command_runner.go:130] >       "repoDigests": [
	I0108 20:37:51.899455   31613 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0108 20:37:51.899464   31613 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0108 20:37:51.899471   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899476   31613 command_runner.go:130] >       "size": "53621675",
	I0108 20:37:51.899480   31613 command_runner.go:130] >       "uid": null,
	I0108 20:37:51.899486   31613 command_runner.go:130] >       "username": "",
	I0108 20:37:51.899490   31613 command_runner.go:130] >       "spec": null,
	I0108 20:37:51.899496   31613 command_runner.go:130] >       "pinned": false
	I0108 20:37:51.899500   31613 command_runner.go:130] >     },
	I0108 20:37:51.899506   31613 command_runner.go:130] >     {
	I0108 20:37:51.899512   31613 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0108 20:37:51.899518   31613 command_runner.go:130] >       "repoTags": [
	I0108 20:37:51.899523   31613 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 20:37:51.899531   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899538   31613 command_runner.go:130] >       "repoDigests": [
	I0108 20:37:51.899545   31613 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0108 20:37:51.899553   31613 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0108 20:37:51.899571   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899578   31613 command_runner.go:130] >       "size": "295456551",
	I0108 20:37:51.899582   31613 command_runner.go:130] >       "uid": {
	I0108 20:37:51.899586   31613 command_runner.go:130] >         "value": "0"
	I0108 20:37:51.899594   31613 command_runner.go:130] >       },
	I0108 20:37:51.899601   31613 command_runner.go:130] >       "username": "",
	I0108 20:37:51.899605   31613 command_runner.go:130] >       "spec": null,
	I0108 20:37:51.899611   31613 command_runner.go:130] >       "pinned": false
	I0108 20:37:51.899615   31613 command_runner.go:130] >     },
	I0108 20:37:51.899621   31613 command_runner.go:130] >     {
	I0108 20:37:51.899627   31613 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0108 20:37:51.899633   31613 command_runner.go:130] >       "repoTags": [
	I0108 20:37:51.899638   31613 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 20:37:51.899644   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899650   31613 command_runner.go:130] >       "repoDigests": [
	I0108 20:37:51.899660   31613 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0108 20:37:51.899670   31613 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0108 20:37:51.899674   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899678   31613 command_runner.go:130] >       "size": "127226832",
	I0108 20:37:51.899684   31613 command_runner.go:130] >       "uid": {
	I0108 20:37:51.899689   31613 command_runner.go:130] >         "value": "0"
	I0108 20:37:51.899694   31613 command_runner.go:130] >       },
	I0108 20:37:51.899699   31613 command_runner.go:130] >       "username": "",
	I0108 20:37:51.899705   31613 command_runner.go:130] >       "spec": null,
	I0108 20:37:51.899709   31613 command_runner.go:130] >       "pinned": false
	I0108 20:37:51.899715   31613 command_runner.go:130] >     },
	I0108 20:37:51.899718   31613 command_runner.go:130] >     {
	I0108 20:37:51.899726   31613 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0108 20:37:51.899733   31613 command_runner.go:130] >       "repoTags": [
	I0108 20:37:51.899738   31613 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 20:37:51.899744   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899749   31613 command_runner.go:130] >       "repoDigests": [
	I0108 20:37:51.899762   31613 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 20:37:51.899772   31613 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0108 20:37:51.899778   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899782   31613 command_runner.go:130] >       "size": "123261750",
	I0108 20:37:51.899789   31613 command_runner.go:130] >       "uid": {
	I0108 20:37:51.899793   31613 command_runner.go:130] >         "value": "0"
	I0108 20:37:51.899807   31613 command_runner.go:130] >       },
	I0108 20:37:51.899813   31613 command_runner.go:130] >       "username": "",
	I0108 20:37:51.899818   31613 command_runner.go:130] >       "spec": null,
	I0108 20:37:51.899824   31613 command_runner.go:130] >       "pinned": false
	I0108 20:37:51.899828   31613 command_runner.go:130] >     },
	I0108 20:37:51.899832   31613 command_runner.go:130] >     {
	I0108 20:37:51.899838   31613 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0108 20:37:51.899845   31613 command_runner.go:130] >       "repoTags": [
	I0108 20:37:51.899850   31613 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 20:37:51.899856   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899860   31613 command_runner.go:130] >       "repoDigests": [
	I0108 20:37:51.899869   31613 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0108 20:37:51.899883   31613 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 20:37:51.899889   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899893   31613 command_runner.go:130] >       "size": "74749335",
	I0108 20:37:51.899900   31613 command_runner.go:130] >       "uid": null,
	I0108 20:37:51.899904   31613 command_runner.go:130] >       "username": "",
	I0108 20:37:51.899910   31613 command_runner.go:130] >       "spec": null,
	I0108 20:37:51.899914   31613 command_runner.go:130] >       "pinned": false
	I0108 20:37:51.899920   31613 command_runner.go:130] >     },
	I0108 20:37:51.899924   31613 command_runner.go:130] >     {
	I0108 20:37:51.899932   31613 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0108 20:37:51.899937   31613 command_runner.go:130] >       "repoTags": [
	I0108 20:37:51.899945   31613 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 20:37:51.899949   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899953   31613 command_runner.go:130] >       "repoDigests": [
	I0108 20:37:51.899974   31613 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 20:37:51.899987   31613 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0108 20:37:51.899991   31613 command_runner.go:130] >       ],
	I0108 20:37:51.899995   31613 command_runner.go:130] >       "size": "61551410",
	I0108 20:37:51.900002   31613 command_runner.go:130] >       "uid": {
	I0108 20:37:51.900009   31613 command_runner.go:130] >         "value": "0"
	I0108 20:37:51.900013   31613 command_runner.go:130] >       },
	I0108 20:37:51.900019   31613 command_runner.go:130] >       "username": "",
	I0108 20:37:51.900023   31613 command_runner.go:130] >       "spec": null,
	I0108 20:37:51.900029   31613 command_runner.go:130] >       "pinned": false
	I0108 20:37:51.900033   31613 command_runner.go:130] >     },
	I0108 20:37:51.900040   31613 command_runner.go:130] >     {
	I0108 20:37:51.900046   31613 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 20:37:51.900052   31613 command_runner.go:130] >       "repoTags": [
	I0108 20:37:51.900057   31613 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 20:37:51.900064   31613 command_runner.go:130] >       ],
	I0108 20:37:51.900071   31613 command_runner.go:130] >       "repoDigests": [
	I0108 20:37:51.900079   31613 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 20:37:51.900102   31613 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 20:37:51.900108   31613 command_runner.go:130] >       ],
	I0108 20:37:51.900118   31613 command_runner.go:130] >       "size": "750414",
	I0108 20:37:51.900126   31613 command_runner.go:130] >       "uid": {
	I0108 20:37:51.900139   31613 command_runner.go:130] >         "value": "65535"
	I0108 20:37:51.900147   31613 command_runner.go:130] >       },
	I0108 20:37:51.900166   31613 command_runner.go:130] >       "username": "",
	I0108 20:37:51.900177   31613 command_runner.go:130] >       "spec": null,
	I0108 20:37:51.900184   31613 command_runner.go:130] >       "pinned": false
	I0108 20:37:51.900187   31613 command_runner.go:130] >     }
	I0108 20:37:51.900191   31613 command_runner.go:130] >   ]
	I0108 20:37:51.900196   31613 command_runner.go:130] > }
	I0108 20:37:51.900729   31613 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 20:37:51.900744   31613 cache_images.go:84] Images are preloaded, skipping loading
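	(The JSON returned by "crictl images --output json" has the shape dumped above: an "images" array whose entries carry "repoTags" and "repoDigests"; a compact way to list just the tags, hypothetical and assuming jq is available on the guest:
		sudo crictl images --output json | jq -r '.images[].repoTags[]'
	)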
	I0108 20:37:51.900834   31613 ssh_runner.go:195] Run: crio config
	I0108 20:37:51.960300   31613 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 20:37:51.960325   31613 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 20:37:51.960332   31613 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 20:37:51.960335   31613 command_runner.go:130] > #
	I0108 20:37:51.960342   31613 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 20:37:51.960348   31613 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 20:37:51.960355   31613 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 20:37:51.960362   31613 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 20:37:51.960365   31613 command_runner.go:130] > # reload'.
	I0108 20:37:51.960371   31613 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 20:37:51.960378   31613 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 20:37:51.960387   31613 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 20:37:51.960395   31613 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 20:37:51.960401   31613 command_runner.go:130] > [crio]
	I0108 20:37:51.960413   31613 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 20:37:51.960422   31613 command_runner.go:130] > # containers images, in this directory.
	I0108 20:37:51.960442   31613 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 20:37:51.960481   31613 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 20:37:51.960562   31613 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 20:37:51.960588   31613 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 20:37:51.960600   31613 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 20:37:51.960608   31613 command_runner.go:130] > storage_driver = "overlay"
	I0108 20:37:51.960620   31613 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 20:37:51.960630   31613 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 20:37:51.960642   31613 command_runner.go:130] > storage_option = [
	I0108 20:37:51.960652   31613 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 20:37:51.960660   31613 command_runner.go:130] > ]
	I0108 20:37:51.960671   31613 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 20:37:51.960698   31613 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 20:37:51.960710   31613 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 20:37:51.960722   31613 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 20:37:51.960736   31613 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 20:37:51.960748   31613 command_runner.go:130] > # always happen on a node reboot
	I0108 20:37:51.960760   31613 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 20:37:51.960773   31613 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 20:37:51.960784   31613 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 20:37:51.960806   31613 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 20:37:51.960819   31613 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 20:37:51.960836   31613 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 20:37:51.960853   31613 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 20:37:51.960870   31613 command_runner.go:130] > # internal_wipe = true
	I0108 20:37:51.960880   31613 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 20:37:51.960893   31613 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 20:37:51.960904   31613 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 20:37:51.960916   31613 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 20:37:51.960929   31613 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 20:37:51.960943   31613 command_runner.go:130] > [crio.api]
	I0108 20:37:51.960954   31613 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 20:37:51.960968   31613 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 20:37:51.960981   31613 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 20:37:51.960990   31613 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 20:37:51.961005   31613 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 20:37:51.961017   31613 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 20:37:51.961025   31613 command_runner.go:130] > # stream_port = "0"
	I0108 20:37:51.961035   31613 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 20:37:51.961050   31613 command_runner.go:130] > # stream_enable_tls = false
	I0108 20:37:51.961063   31613 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 20:37:51.961074   31613 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 20:37:51.961086   31613 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 20:37:51.961101   31613 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 20:37:51.961108   31613 command_runner.go:130] > # minutes.
	I0108 20:37:51.961116   31613 command_runner.go:130] > # stream_tls_cert = ""
	I0108 20:37:51.961129   31613 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 20:37:51.961147   31613 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 20:37:51.961161   31613 command_runner.go:130] > # stream_tls_key = ""
	I0108 20:37:51.961172   31613 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 20:37:51.961185   31613 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 20:37:51.961199   31613 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 20:37:51.961209   31613 command_runner.go:130] > # stream_tls_ca = ""
	I0108 20:37:51.961222   31613 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:37:51.961230   31613 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 20:37:51.961244   31613 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:37:51.961254   31613 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
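	The two grpc_max_*_msg_size values above are simply the documented default written out in bytes; the arithmetic can be checked in one line:
	  echo $((16 * 1024 * 1024))   # 16777216 bytes, i.e. 16 MiB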
	I0108 20:37:51.961304   31613 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 20:37:51.961322   31613 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 20:37:51.961332   31613 command_runner.go:130] > [crio.runtime]
	I0108 20:37:51.961346   31613 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 20:37:51.961358   31613 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 20:37:51.961368   31613 command_runner.go:130] > # "nofile=1024:2048"
	I0108 20:37:51.961382   31613 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 20:37:51.961405   31613 command_runner.go:130] > # default_ulimits = [
	I0108 20:37:51.961415   31613 command_runner.go:130] > # ]
	I0108 20:37:51.961430   31613 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 20:37:51.961441   31613 command_runner.go:130] > # no_pivot = false
	I0108 20:37:51.961450   31613 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 20:37:51.961463   31613 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 20:37:51.961474   31613 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 20:37:51.961484   31613 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 20:37:51.961496   31613 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 20:37:51.961511   31613 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:37:51.961523   31613 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 20:37:51.961534   31613 command_runner.go:130] > # Cgroup setting for conmon
	I0108 20:37:51.961548   31613 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 20:37:51.961559   31613 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 20:37:51.961571   31613 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 20:37:51.961584   31613 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 20:37:51.961599   31613 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:37:51.961609   31613 command_runner.go:130] > conmon_env = [
	I0108 20:37:51.961620   31613 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 20:37:51.961629   31613 command_runner.go:130] > ]
	I0108 20:37:51.961642   31613 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 20:37:51.961653   31613 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 20:37:51.961672   31613 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 20:37:51.961683   31613 command_runner.go:130] > # default_env = [
	I0108 20:37:51.961690   31613 command_runner.go:130] > # ]
	I0108 20:37:51.961704   31613 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 20:37:51.961711   31613 command_runner.go:130] > # selinux = false
	I0108 20:37:51.961724   31613 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 20:37:51.961738   31613 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 20:37:51.961749   31613 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 20:37:51.961763   31613 command_runner.go:130] > # seccomp_profile = ""
	I0108 20:37:51.961777   31613 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 20:37:51.961790   31613 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 20:37:51.961805   31613 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 20:37:51.961816   31613 command_runner.go:130] > # which might increase security.
	I0108 20:37:51.961825   31613 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 20:37:51.961838   31613 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 20:37:51.961852   31613 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 20:37:51.961870   31613 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 20:37:51.961884   31613 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 20:37:51.961895   31613 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:37:51.961903   31613 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 20:37:51.961915   31613 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 20:37:51.961926   31613 command_runner.go:130] > # the cgroup blockio controller.
	I0108 20:37:51.961937   31613 command_runner.go:130] > # blockio_config_file = ""
	I0108 20:37:51.961948   31613 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 20:37:51.961959   31613 command_runner.go:130] > # irqbalance daemon.
	I0108 20:37:51.961971   31613 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 20:37:51.961986   31613 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 20:37:51.961999   31613 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:37:51.962010   31613 command_runner.go:130] > # rdt_config_file = ""
	I0108 20:37:51.962022   31613 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 20:37:51.962037   31613 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 20:37:51.962050   31613 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 20:37:51.962060   31613 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 20:37:51.962070   31613 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 20:37:51.962091   31613 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 20:37:51.962099   31613 command_runner.go:130] > # will be added.
	I0108 20:37:51.962108   31613 command_runner.go:130] > # default_capabilities = [
	I0108 20:37:51.962114   31613 command_runner.go:130] > # 	"CHOWN",
	I0108 20:37:51.962123   31613 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 20:37:51.962134   31613 command_runner.go:130] > # 	"FSETID",
	I0108 20:37:51.962144   31613 command_runner.go:130] > # 	"FOWNER",
	I0108 20:37:51.962154   31613 command_runner.go:130] > # 	"SETGID",
	I0108 20:37:51.962162   31613 command_runner.go:130] > # 	"SETUID",
	I0108 20:37:51.962172   31613 command_runner.go:130] > # 	"SETPCAP",
	I0108 20:37:51.962181   31613 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 20:37:51.962192   31613 command_runner.go:130] > # 	"KILL",
	I0108 20:37:51.962202   31613 command_runner.go:130] > # ]
	I0108 20:37:51.962212   31613 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 20:37:51.962224   31613 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:37:51.962233   31613 command_runner.go:130] > # default_sysctls = [
	I0108 20:37:51.962244   31613 command_runner.go:130] > # ]
	I0108 20:37:51.962251   31613 command_runner.go:130] > # List of devices on the host that a
	I0108 20:37:51.962267   31613 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 20:37:51.962278   31613 command_runner.go:130] > # allowed_devices = [
	I0108 20:37:51.962285   31613 command_runner.go:130] > # 	"/dev/fuse",
	I0108 20:37:51.962293   31613 command_runner.go:130] > # ]
	I0108 20:37:51.962302   31613 command_runner.go:130] > # List of additional devices, specified as
	I0108 20:37:51.962316   31613 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 20:37:51.962329   31613 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 20:37:51.962403   31613 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:37:51.962418   31613 command_runner.go:130] > # additional_devices = [
	I0108 20:37:51.962425   31613 command_runner.go:130] > # ]
	I0108 20:37:51.962434   31613 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 20:37:51.962443   31613 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 20:37:51.962451   31613 command_runner.go:130] > # 	"/etc/cdi",
	I0108 20:37:51.962461   31613 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 20:37:51.962471   31613 command_runner.go:130] > # ]
	I0108 20:37:51.962483   31613 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 20:37:51.962496   31613 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 20:37:51.962506   31613 command_runner.go:130] > # Defaults to false.
	I0108 20:37:51.962519   31613 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 20:37:51.962531   31613 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 20:37:51.962541   31613 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 20:37:51.962550   31613 command_runner.go:130] > # hooks_dir = [
	I0108 20:37:51.962558   31613 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 20:37:51.962566   31613 command_runner.go:130] > # ]
	I0108 20:37:51.962575   31613 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 20:37:51.962587   31613 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 20:37:51.962597   31613 command_runner.go:130] > # its default mounts from the following two files:
	I0108 20:37:51.962607   31613 command_runner.go:130] > #
	I0108 20:37:51.962619   31613 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 20:37:51.962640   31613 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 20:37:51.962654   31613 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 20:37:51.962662   31613 command_runner.go:130] > #
	I0108 20:37:51.962675   31613 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 20:37:51.962688   31613 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 20:37:51.962700   31613 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 20:37:51.962711   31613 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 20:37:51.962722   31613 command_runner.go:130] > #
	I0108 20:37:51.962730   31613 command_runner.go:130] > # default_mounts_file = ""
	I0108 20:37:51.962738   31613 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 20:37:51.962752   31613 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 20:37:51.962762   31613 command_runner.go:130] > pids_limit = 1024
	I0108 20:37:51.962772   31613 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 20:37:51.962784   31613 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 20:37:51.962797   31613 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 20:37:51.962816   31613 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 20:37:51.962825   31613 command_runner.go:130] > # log_size_max = -1
	I0108 20:37:51.962839   31613 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 20:37:51.962849   31613 command_runner.go:130] > # log_to_journald = false
	I0108 20:37:51.962862   31613 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 20:37:51.962873   31613 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 20:37:51.962885   31613 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 20:37:51.962894   31613 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 20:37:51.962906   31613 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 20:37:51.962916   31613 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 20:37:51.962934   31613 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 20:37:51.962944   31613 command_runner.go:130] > # read_only = false
	I0108 20:37:51.962955   31613 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 20:37:51.962968   31613 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 20:37:51.962977   31613 command_runner.go:130] > # live configuration reload.
	I0108 20:37:51.962985   31613 command_runner.go:130] > # log_level = "info"
	I0108 20:37:51.962993   31613 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 20:37:51.963001   31613 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:37:51.963005   31613 command_runner.go:130] > # log_filter = ""
	I0108 20:37:51.963013   31613 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 20:37:51.963019   31613 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 20:37:51.963026   31613 command_runner.go:130] > # separated by comma.
	I0108 20:37:51.963033   31613 command_runner.go:130] > # uid_mappings = ""
	I0108 20:37:51.963041   31613 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 20:37:51.963047   31613 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 20:37:51.963054   31613 command_runner.go:130] > # separated by comma.
	I0108 20:37:51.963058   31613 command_runner.go:130] > # gid_mappings = ""
	I0108 20:37:51.963067   31613 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 20:37:51.963077   31613 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:37:51.963085   31613 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:37:51.963092   31613 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 20:37:51.963098   31613 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 20:37:51.963106   31613 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:37:51.963112   31613 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:37:51.963119   31613 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 20:37:51.963125   31613 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 20:37:51.963133   31613 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 20:37:51.963142   31613 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 20:37:51.963146   31613 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 20:37:51.963153   31613 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 20:37:51.963159   31613 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 20:37:51.963166   31613 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 20:37:51.963171   31613 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 20:37:51.963177   31613 command_runner.go:130] > drop_infra_ctr = false
	I0108 20:37:51.963184   31613 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 20:37:51.963191   31613 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 20:37:51.963201   31613 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 20:37:51.963211   31613 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 20:37:51.963223   31613 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 20:37:51.963234   31613 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 20:37:51.963244   31613 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 20:37:51.963258   31613 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 20:37:51.963268   31613 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 20:37:51.963281   31613 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 20:37:51.963296   31613 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 20:37:51.963309   31613 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 20:37:51.963318   31613 command_runner.go:130] > # default_runtime = "runc"
	I0108 20:37:51.963326   31613 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 20:37:51.963338   31613 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 20:37:51.963349   31613 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 20:37:51.963355   31613 command_runner.go:130] > # creation as a file is not desired either.
	I0108 20:37:51.963363   31613 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 20:37:51.963374   31613 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 20:37:51.963381   31613 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 20:37:51.963387   31613 command_runner.go:130] > # ]
	I0108 20:37:51.963400   31613 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 20:37:51.963407   31613 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 20:37:51.963415   31613 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 20:37:51.963424   31613 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 20:37:51.963429   31613 command_runner.go:130] > #
	I0108 20:37:51.963434   31613 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 20:37:51.963441   31613 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 20:37:51.963446   31613 command_runner.go:130] > #  runtime_type = "oci"
	I0108 20:37:51.963454   31613 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 20:37:51.963459   31613 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 20:37:51.963466   31613 command_runner.go:130] > #  allowed_annotations = []
	I0108 20:37:51.963470   31613 command_runner.go:130] > # Where:
	I0108 20:37:51.963478   31613 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 20:37:51.963486   31613 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 20:37:51.963493   31613 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 20:37:51.963501   31613 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 20:37:51.963508   31613 command_runner.go:130] > #   in $PATH.
	I0108 20:37:51.963517   31613 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 20:37:51.963524   31613 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 20:37:51.963530   31613 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 20:37:51.963537   31613 command_runner.go:130] > #   state.
	I0108 20:37:51.963543   31613 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 20:37:51.963551   31613 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0108 20:37:51.963558   31613 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 20:37:51.963565   31613 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 20:37:51.963574   31613 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 20:37:51.963583   31613 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 20:37:51.963590   31613 command_runner.go:130] > #   The currently recognized values are:
	I0108 20:37:51.963596   31613 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 20:37:51.963607   31613 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 20:37:51.963615   31613 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 20:37:51.963621   31613 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 20:37:51.963630   31613 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 20:37:51.963639   31613 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 20:37:51.963648   31613 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 20:37:51.963659   31613 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 20:37:51.963666   31613 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 20:37:51.963671   31613 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 20:37:51.963677   31613 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 20:37:51.963682   31613 command_runner.go:130] > runtime_type = "oci"
	I0108 20:37:51.963688   31613 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 20:37:51.963693   31613 command_runner.go:130] > runtime_config_path = ""
	I0108 20:37:51.963699   31613 command_runner.go:130] > monitor_path = ""
	I0108 20:37:51.963703   31613 command_runner.go:130] > monitor_cgroup = ""
	I0108 20:37:51.963710   31613 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 20:37:51.963716   31613 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 20:37:51.963722   31613 command_runner.go:130] > # running containers
	I0108 20:37:51.963726   31613 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 20:37:51.963738   31613 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 20:37:51.963787   31613 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 20:37:51.963797   31613 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 20:37:51.963801   31613 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 20:37:51.963806   31613 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 20:37:51.963813   31613 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 20:37:51.963820   31613 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 20:37:51.963824   31613 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 20:37:51.963831   31613 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 20:37:51.963837   31613 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 20:37:51.963845   31613 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 20:37:51.963851   31613 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 20:37:51.963860   31613 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0108 20:37:51.963870   31613 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 20:37:51.963878   31613 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 20:37:51.963887   31613 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 20:37:51.963897   31613 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 20:37:51.963907   31613 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 20:37:51.963914   31613 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 20:37:51.963920   31613 command_runner.go:130] > # Example:
	I0108 20:37:51.963925   31613 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 20:37:51.963932   31613 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 20:37:51.963937   31613 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 20:37:51.963947   31613 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 20:37:51.963953   31613 command_runner.go:130] > # cpuset = 0
	I0108 20:37:51.963957   31613 command_runner.go:130] > # cpushares = "0-1"
	I0108 20:37:51.963963   31613 command_runner.go:130] > # Where:
	I0108 20:37:51.963968   31613 command_runner.go:130] > # The workload name is workload-type.
	I0108 20:37:51.963977   31613 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 20:37:51.963984   31613 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 20:37:51.963991   31613 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 20:37:51.964002   31613 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 20:37:51.964010   31613 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 20:37:51.964014   31613 command_runner.go:130] > # 
	I0108 20:37:51.964023   31613 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 20:37:51.964031   31613 command_runner.go:130] > #
	I0108 20:37:51.964041   31613 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 20:37:51.964054   31613 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 20:37:51.964066   31613 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 20:37:51.964080   31613 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 20:37:51.964102   31613 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 20:37:51.964121   31613 command_runner.go:130] > [crio.image]
	I0108 20:37:51.964129   31613 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 20:37:51.964136   31613 command_runner.go:130] > # default_transport = "docker://"
	I0108 20:37:51.964142   31613 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 20:37:51.964150   31613 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:37:51.964156   31613 command_runner.go:130] > # global_auth_file = ""
	I0108 20:37:51.964161   31613 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 20:37:51.964168   31613 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:37:51.964173   31613 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 20:37:51.964182   31613 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 20:37:51.964190   31613 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:37:51.964195   31613 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:37:51.964202   31613 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 20:37:51.964211   31613 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 20:37:51.964221   31613 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 20:37:51.964230   31613 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 20:37:51.964239   31613 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 20:37:51.964245   31613 command_runner.go:130] > # pause_command = "/pause"
	I0108 20:37:51.964258   31613 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 20:37:51.964267   31613 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 20:37:51.964276   31613 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 20:37:51.964285   31613 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 20:37:51.964293   31613 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 20:37:51.964300   31613 command_runner.go:130] > # signature_policy = ""
	I0108 20:37:51.964310   31613 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 20:37:51.964320   31613 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 20:37:51.964326   31613 command_runner.go:130] > # changing them here.
	I0108 20:37:51.964331   31613 command_runner.go:130] > # insecure_registries = [
	I0108 20:37:51.964334   31613 command_runner.go:130] > # ]
	I0108 20:37:51.964340   31613 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 20:37:51.964345   31613 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 20:37:51.964349   31613 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 20:37:51.964355   31613 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 20:37:51.964359   31613 command_runner.go:130] > # big_files_temporary_dir = ""
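	The [crio.image] table above pins pause_image to registry.k8s.io/pause:3.9; a hedged way to confirm the node actually holds that image once it is up is a plain crictl listing:
	  sudo crictl images | grep pause   # expect registry.k8s.io/pause:3.9 in the output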
	I0108 20:37:51.964365   31613 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 20:37:51.964368   31613 command_runner.go:130] > # CNI plugins.
	I0108 20:37:51.964378   31613 command_runner.go:130] > [crio.network]
	I0108 20:37:51.964384   31613 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 20:37:51.964389   31613 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 20:37:51.964397   31613 command_runner.go:130] > # cni_default_network = ""
	I0108 20:37:51.964402   31613 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 20:37:51.964406   31613 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 20:37:51.964412   31613 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 20:37:51.964418   31613 command_runner.go:130] > # plugin_dirs = [
	I0108 20:37:51.964422   31613 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 20:37:51.964426   31613 command_runner.go:130] > # ]
	I0108 20:37:51.964432   31613 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 20:37:51.964438   31613 command_runner.go:130] > [crio.metrics]
	I0108 20:37:51.964443   31613 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 20:37:51.964449   31613 command_runner.go:130] > enable_metrics = true
	I0108 20:37:51.964454   31613 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 20:37:51.964462   31613 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 20:37:51.964468   31613 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 20:37:51.964478   31613 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 20:37:51.964489   31613 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 20:37:51.964495   31613 command_runner.go:130] > # metrics_collectors = [
	I0108 20:37:51.964499   31613 command_runner.go:130] > # 	"operations",
	I0108 20:37:51.964505   31613 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 20:37:51.964510   31613 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 20:37:51.964517   31613 command_runner.go:130] > # 	"operations_errors",
	I0108 20:37:51.964521   31613 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 20:37:51.964528   31613 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 20:37:51.964532   31613 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 20:37:51.964536   31613 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 20:37:51.964543   31613 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 20:37:51.964547   31613 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 20:37:51.964551   31613 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 20:37:51.964558   31613 command_runner.go:130] > # 	"containers_oom_total",
	I0108 20:37:51.964562   31613 command_runner.go:130] > # 	"containers_oom",
	I0108 20:37:51.964568   31613 command_runner.go:130] > # 	"processes_defunct",
	I0108 20:37:51.964572   31613 command_runner.go:130] > # 	"operations_total",
	I0108 20:37:51.964578   31613 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 20:37:51.964586   31613 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 20:37:51.964592   31613 command_runner.go:130] > # 	"operations_errors_total",
	I0108 20:37:51.964597   31613 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 20:37:51.964603   31613 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 20:37:51.964608   31613 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 20:37:51.964614   31613 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 20:37:51.964619   31613 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 20:37:51.964624   31613 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 20:37:51.964628   31613 command_runner.go:130] > # ]
	I0108 20:37:51.964633   31613 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 20:37:51.964639   31613 command_runner.go:130] > # metrics_port = 9090
	I0108 20:37:51.964644   31613 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 20:37:51.964650   31613 command_runner.go:130] > # metrics_socket = ""
	I0108 20:37:51.964657   31613 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 20:37:51.964669   31613 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 20:37:51.964685   31613 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 20:37:51.964695   31613 command_runner.go:130] > # certificate on any modification event.
	I0108 20:37:51.964701   31613 command_runner.go:130] > # metrics_cert = ""
	I0108 20:37:51.964715   31613 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 20:37:51.964726   31613 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 20:37:51.964732   31613 command_runner.go:130] > # metrics_key = ""
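	With enable_metrics = true and the commented-out default metrics_port = 9090, the metrics endpoint can be scraped straight from the node; the port here is the default from the config above, not something this log confirms explicitly:
	  curl -s http://127.0.0.1:9090/metrics | grep 'crio_' | head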
	I0108 20:37:51.964746   31613 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 20:37:51.964762   31613 command_runner.go:130] > [crio.tracing]
	I0108 20:37:51.964773   31613 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 20:37:51.964783   31613 command_runner.go:130] > # enable_tracing = false
	I0108 20:37:51.964797   31613 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 20:37:51.964805   31613 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 20:37:51.964810   31613 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 20:37:51.964817   31613 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 20:37:51.964822   31613 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 20:37:51.964828   31613 command_runner.go:130] > [crio.stats]
	I0108 20:37:51.964834   31613 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 20:37:51.964842   31613 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 20:37:51.964849   31613 command_runner.go:130] > # stats_collection_period = 0
	I0108 20:37:51.964885   31613 command_runner.go:130] ! time="2024-01-08 20:37:51.937265815Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 20:37:51.964897   31613 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
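	The CRI-O startup line above lists the default capability set in effect. A hedged way to see the same set from inside any running pod (the pod name and the mask are placeholders, not values from this log):
	  kubectl exec <some-pod> -- grep CapBnd /proc/1/status   # bounding-set mask for the container's PID 1
	  capsh --decode=<mask-from-previous-command>             # translates the mask into capability names, where capsh is installed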
	I0108 20:37:51.964975   31613 cni.go:84] Creating CNI manager for ""
	I0108 20:37:51.964987   31613 cni.go:136] 1 nodes found, recommending kindnet
	I0108 20:37:51.965004   31613 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:37:51.965023   31613 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-340815 NodeName:multinode-340815 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:37:51.965145   31613 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-340815"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
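	The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. Before a real init it can be exercised without touching the node; this is a sketch of such a check, not the exact invocation minikube issues later in this run:
	  sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run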
	
	I0108 20:37:51.965210   31613 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-340815 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 20:37:51.965270   31613 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:37:51.974299   31613 command_runner.go:130] > kubeadm
	I0108 20:37:51.974325   31613 command_runner.go:130] > kubectl
	I0108 20:37:51.974329   31613 command_runner.go:130] > kubelet
	I0108 20:37:51.974345   31613 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:37:51.974401   31613 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:37:51.982910   31613 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0108 20:37:51.999574   31613 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:37:52.016492   31613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
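	After the kubelet drop-in (10-kubeadm.conf) and unit file above are written, systemd has to re-read them before the ExecStart shown earlier takes effect; a minimal sketch of that step and its verification:
	  sudo systemctl daemon-reload
	  systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in with the ExecStart above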
	I0108 20:37:52.033019   31613 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0108 20:37:52.037080   31613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
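	The bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current control-plane IP; verifying the result is the same grep that was just run, now expected to succeed:
	  grep control-plane.minikube.internal /etc/hosts
	  # expected: 192.168.39.196	control-plane.minikube.internal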
	I0108 20:37:52.050222   31613 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815 for IP: 192.168.39.196
	I0108 20:37:52.050257   31613 certs.go:190] acquiring lock for shared ca certs: {Name:mke01aa9d73e320a9a3907677cf29c75f0fa86d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:37:52.050408   31613 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key
	I0108 20:37:52.050464   31613 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key
	I0108 20:37:52.050521   31613 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key
	I0108 20:37:52.050537   31613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt with IP's: []
	I0108 20:37:52.211569   31613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt ...
	I0108 20:37:52.211605   31613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt: {Name:mkab69734384e8a1f54f09b3ac0c02004a050511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:37:52.211800   31613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key ...
	I0108 20:37:52.211814   31613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key: {Name:mk4377c3c6e54e0279723d06f91834bab8b4c2fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:37:52.211915   31613 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.key.85aad866
	I0108 20:37:52.211933   31613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.crt.85aad866 with IP's: [192.168.39.196 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 20:37:52.527668   31613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.crt.85aad866 ...
	I0108 20:37:52.527707   31613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.crt.85aad866: {Name:mk95d47940009f80df286e0a131d9a95480613d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:37:52.527884   31613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.key.85aad866 ...
	I0108 20:37:52.527901   31613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.key.85aad866: {Name:mka5dbe2e0ca4d8efb37c579f3029fcd08a8f84a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:37:52.527987   31613 certs.go:337] copying /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.crt.85aad866 -> /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.crt
	I0108 20:37:52.528108   31613 certs.go:341] copying /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.key.85aad866 -> /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.key
	I0108 20:37:52.528183   31613 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.key
	I0108 20:37:52.528198   31613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.crt with IP's: []
	I0108 20:37:52.641977   31613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.crt ...
	I0108 20:37:52.642004   31613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.crt: {Name:mk58715159c4278350b754398c4138a2e5ce821c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:37:52.642167   31613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.key ...
	I0108 20:37:52.642183   31613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.key: {Name:mk91973144d021c4ee906c1d2b99f1aaa480cb8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:37:52.642252   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 20:37:52.642270   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 20:37:52.642280   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 20:37:52.642304   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 20:37:52.642316   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 20:37:52.642328   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 20:37:52.642342   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 20:37:52.642354   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 20:37:52.642400   31613 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem (1338 bytes)
	W0108 20:37:52.642434   31613 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896_empty.pem, impossibly tiny 0 bytes
	I0108 20:37:52.642445   31613 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:37:52.642467   31613 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem (1082 bytes)
	I0108 20:37:52.642489   31613 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:37:52.642516   31613 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem (1675 bytes)
	I0108 20:37:52.642553   31613 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem (1708 bytes)
	I0108 20:37:52.642581   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:37:52.642593   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem -> /usr/share/ca-certificates/17896.pem
	I0108 20:37:52.642605   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> /usr/share/ca-certificates/178962.pem
	I0108 20:37:52.643188   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:37:52.668736   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 20:37:52.692864   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:37:52.716924   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 20:37:52.742343   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:37:52.766929   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:37:52.789891   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:37:52.813125   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:37:52.836668   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:37:52.859965   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem --> /usr/share/ca-certificates/17896.pem (1338 bytes)
	I0108 20:37:52.884152   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /usr/share/ca-certificates/178962.pem (1708 bytes)
	I0108 20:37:52.907774   31613 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
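The scp lines above push each generated certificate (and an in-memory kubeconfig) from the Jenkins workspace into /var/lib/minikube/certs on the guest over SSH. A minimal sketch of one such transfer, assuming plain key-based scp; the host, key path and file names below are placeholders rather than values taken from this run:

```go
// copy_cert.go: a minimal sketch of pushing a local cert to a remote path over SSH,
// roughly what the scp lines above do. Host, key path and file names are placeholders.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	local := "/tmp/apiserver.crt"                                            // hypothetical local cert
	remote := "docker@192.168.39.196:/var/lib/minikube/certs/apiserver.crt" // guest target path
	key := "/tmp/id_rsa"                                                     // hypothetical SSH key

	// StrictHostKeyChecking=no mirrors a throwaway test-VM setup, not a production default.
	cmd := exec.Command("scp", "-i", key, "-o", "StrictHostKeyChecking=no", local, remote)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("scp failed: %v\n%s", err, out)
		return
	}
	fmt.Println("copied", local, "->", remote)
}
```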
	I0108 20:37:52.924062   31613 ssh_runner.go:195] Run: openssl version
	I0108 20:37:52.929662   31613 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 20:37:52.929743   31613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:37:52.939591   31613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:37:52.944152   31613 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:37:52.944527   31613 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:37:52.944607   31613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:37:52.950207   31613 command_runner.go:130] > b5213941
	I0108 20:37:52.950554   31613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:37:52.960547   31613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17896.pem && ln -fs /usr/share/ca-certificates/17896.pem /etc/ssl/certs/17896.pem"
	I0108 20:37:52.970685   31613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17896.pem
	I0108 20:37:52.975341   31613 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 20:37:52.975564   31613 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 20:37:52.975631   31613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17896.pem
	I0108 20:37:52.981189   31613 command_runner.go:130] > 51391683
	I0108 20:37:52.981524   31613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17896.pem /etc/ssl/certs/51391683.0"
	I0108 20:37:52.991689   31613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178962.pem && ln -fs /usr/share/ca-certificates/178962.pem /etc/ssl/certs/178962.pem"
	I0108 20:37:53.001790   31613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178962.pem
	I0108 20:37:53.006448   31613 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 20:37:53.006647   31613 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 20:37:53.006696   31613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178962.pem
	I0108 20:37:53.012346   31613 command_runner.go:130] > 3ec20f2e
	I0108 20:37:53.012553   31613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178962.pem /etc/ssl/certs/3ec20f2e.0"
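The test/ln blocks above install each CA into /etc/ssl/certs under its OpenSSL subject hash (b5213941, 51391683 and 3ec20f2e in this run), which is how OpenSSL locates trusted roots. A minimal sketch of that hash-and-symlink step, assuming it runs with permission to write /etc/ssl/certs; the input path is a placeholder:

```go
// hash_link.go: compute the OpenSSL subject hash of a CA and link
// /etc/ssl/certs/<hash>.0 to it, mirroring the openssl/ln steps in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path

	// openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// ln -fs <pem> /etc/ssl/certs/<hash>.0, but only if the link is missing.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trusted via", link)
}
```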
	I0108 20:37:53.023434   31613 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:37:53.028082   31613 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:37:53.028449   31613 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:37:53.028526   31613 kubeadm.go:404] StartCluster: {Name:multinode-340815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:37:53.028614   31613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 20:37:53.028691   31613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:37:53.066609   31613 cri.go:89] found id: ""
	I0108 20:37:53.066671   31613 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:37:53.076589   31613 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0108 20:37:53.076623   31613 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0108 20:37:53.076634   31613 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0108 20:37:53.076704   31613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:37:53.085732   31613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:37:53.094989   31613 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 20:37:53.095024   31613 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 20:37:53.095041   31613 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 20:37:53.095052   31613 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:37:53.095095   31613 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:37:53.095127   31613 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 20:37:53.478968   31613 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:37:53.478996   31613 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:38:05.844996   31613 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 20:38:05.845022   31613 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0108 20:38:05.845053   31613 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 20:38:05.845057   31613 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 20:38:05.845142   31613 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:38:05.845162   31613 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 20:38:05.845253   31613 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:38:05.845264   31613 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 20:38:05.845370   31613 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 20:38:05.845380   31613 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 20:38:05.845462   31613 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:38:05.847496   31613 out.go:204]   - Generating certificates and keys ...
	I0108 20:38:05.845484   31613 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:38:05.847617   31613 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 20:38:05.847641   31613 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 20:38:05.847719   31613 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 20:38:05.847731   31613 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 20:38:05.847816   31613 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:38:05.847833   31613 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 20:38:05.847904   31613 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:38:05.847927   31613 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0108 20:38:05.848024   31613 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 20:38:05.848039   31613 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0108 20:38:05.848124   31613 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 20:38:05.848136   31613 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0108 20:38:05.848197   31613 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 20:38:05.848211   31613 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0108 20:38:05.848372   31613 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-340815] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0108 20:38:05.848385   31613 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-340815] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0108 20:38:05.848482   31613 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 20:38:05.848501   31613 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0108 20:38:05.848634   31613 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-340815] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0108 20:38:05.848649   31613 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-340815] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0108 20:38:05.848738   31613 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:38:05.848751   31613 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 20:38:05.848827   31613 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:38:05.848847   31613 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 20:38:05.848902   31613 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 20:38:05.848911   31613 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0108 20:38:05.848985   31613 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:38:05.848995   31613 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:38:05.849059   31613 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:38:05.849068   31613 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:38:05.849131   31613 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:38:05.849144   31613 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:38:05.849211   31613 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:38:05.849228   31613 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:38:05.849306   31613 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:38:05.849317   31613 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:38:05.849413   31613 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:38:05.849429   31613 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 20:38:05.849521   31613 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:38:05.852501   31613 out.go:204]   - Booting up control plane ...
	I0108 20:38:05.849538   31613 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:38:05.852599   31613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:38:05.852613   31613 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:38:05.852686   31613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:38:05.852693   31613 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:38:05.852751   31613 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:38:05.852771   31613 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:38:05.852880   31613 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:38:05.852897   31613 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:38:05.853023   31613 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:38:05.853036   31613 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:38:05.853086   31613 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 20:38:05.853096   31613 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 20:38:05.853215   31613 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:38:05.853222   31613 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 20:38:05.853329   31613 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003169 seconds
	I0108 20:38:05.853344   31613 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.003169 seconds
	I0108 20:38:05.853455   31613 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:38:05.853477   31613 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 20:38:05.853630   31613 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:38:05.853644   31613 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 20:38:05.853715   31613 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:38:05.853746   31613 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0108 20:38:05.853931   31613 kubeadm.go:322] [mark-control-plane] Marking the node multinode-340815 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 20:38:05.853944   31613 command_runner.go:130] > [mark-control-plane] Marking the node multinode-340815 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 20:38:05.854013   31613 kubeadm.go:322] [bootstrap-token] Using token: r7z0gj.1utfyo1i20twyh0k
	I0108 20:38:05.855944   31613 out.go:204]   - Configuring RBAC rules ...
	I0108 20:38:05.854095   31613 command_runner.go:130] > [bootstrap-token] Using token: r7z0gj.1utfyo1i20twyh0k
	I0108 20:38:05.856067   31613 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:38:05.856080   31613 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 20:38:05.856181   31613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:38:05.856189   31613 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 20:38:05.856293   31613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:38:05.856300   31613 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 20:38:05.856396   31613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 20:38:05.856403   31613 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 20:38:05.856487   31613 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:38:05.856493   31613 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 20:38:05.856554   31613 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:38:05.856566   31613 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 20:38:05.856654   31613 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:38:05.856661   31613 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 20:38:05.856694   31613 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 20:38:05.856701   31613 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 20:38:05.856776   31613 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 20:38:05.856797   31613 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 20:38:05.856810   31613 kubeadm.go:322] 
	I0108 20:38:05.856889   31613 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 20:38:05.856898   31613 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0108 20:38:05.856906   31613 kubeadm.go:322] 
	I0108 20:38:05.856991   31613 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 20:38:05.857013   31613 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0108 20:38:05.857024   31613 kubeadm.go:322] 
	I0108 20:38:05.857063   31613 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 20:38:05.857072   31613 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0108 20:38:05.857188   31613 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:38:05.857222   31613 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 20:38:05.857287   31613 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:38:05.857302   31613 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 20:38:05.857319   31613 kubeadm.go:322] 
	I0108 20:38:05.857378   31613 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 20:38:05.857388   31613 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0108 20:38:05.857391   31613 kubeadm.go:322] 
	I0108 20:38:05.857465   31613 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 20:38:05.857474   31613 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 20:38:05.857477   31613 kubeadm.go:322] 
	I0108 20:38:05.857515   31613 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 20:38:05.857521   31613 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0108 20:38:05.857580   31613 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:38:05.857588   31613 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 20:38:05.857646   31613 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:38:05.857652   31613 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 20:38:05.857655   31613 kubeadm.go:322] 
	I0108 20:38:05.857719   31613 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:38:05.857728   31613 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0108 20:38:05.857832   31613 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 20:38:05.857849   31613 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0108 20:38:05.857854   31613 kubeadm.go:322] 
	I0108 20:38:05.857914   31613 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token r7z0gj.1utfyo1i20twyh0k \
	I0108 20:38:05.857921   31613 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token r7z0gj.1utfyo1i20twyh0k \
	I0108 20:38:05.857999   31613 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 \
	I0108 20:38:05.858005   31613 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 \
	I0108 20:38:05.858020   31613 kubeadm.go:322] 	--control-plane 
	I0108 20:38:05.858027   31613 command_runner.go:130] > 	--control-plane 
	I0108 20:38:05.858030   31613 kubeadm.go:322] 
	I0108 20:38:05.858099   31613 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:38:05.858121   31613 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0108 20:38:05.858146   31613 kubeadm.go:322] 
	I0108 20:38:05.858253   31613 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token r7z0gj.1utfyo1i20twyh0k \
	I0108 20:38:05.858263   31613 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token r7z0gj.1utfyo1i20twyh0k \
	I0108 20:38:05.858377   31613 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 
	I0108 20:38:05.858388   31613 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 
	I0108 20:38:05.858413   31613 cni.go:84] Creating CNI manager for ""
	I0108 20:38:05.858423   31613 cni.go:136] 1 nodes found, recommending kindnet
	I0108 20:38:05.860471   31613 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 20:38:05.862138   31613 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:38:05.868263   31613 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 20:38:05.868291   31613 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 20:38:05.868301   31613 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 20:38:05.868310   31613 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:38:05.868327   31613 command_runner.go:130] > Access: 2024-01-08 20:37:34.195702624 +0000
	I0108 20:38:05.868349   31613 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 20:38:05.868360   31613 command_runner.go:130] > Change: 2024-01-08 20:37:32.351702624 +0000
	I0108 20:38:05.868380   31613 command_runner.go:130] >  Birth: -
	I0108 20:38:05.877830   31613 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:38:05.877857   31613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:38:05.900950   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:38:06.884436   31613 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0108 20:38:06.897346   31613 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0108 20:38:06.909227   31613 command_runner.go:130] > serviceaccount/kindnet created
	I0108 20:38:06.925846   31613 command_runner.go:130] > daemonset.apps/kindnet created
	I0108 20:38:06.928418   31613 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.02741599s)
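The kindnet manifest is written to /var/tmp/minikube/cni.yaml and applied with the bundled kubectl against the in-guest kubeconfig, and the runner reports how long the apply took (1.02s above). A minimal sketch of the same invocation and timing, using the paths printed in the log; it only makes sense when run inside the guest:

```go
// apply_cni.go: apply a CNI manifest with a pinned kubectl and kubeconfig and time
// the call, as the ssh_runner lines above do. Paths are the ones from the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
	args := []string{"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", "/var/tmp/minikube/cni.yaml"}

	start := time.Now()
	out, err := exec.Command(kubectl, args...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
		return
	}
	fmt.Printf("Completed in %s\n", time.Since(start))
}
```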
	I0108 20:38:06.928474   31613 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 20:38:06.928560   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:06.928573   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=multinode-340815 minikube.k8s.io/updated_at=2024_01_08T20_38_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:06.964046   31613 command_runner.go:130] > -16
	I0108 20:38:06.964098   31613 ops.go:34] apiserver oom_adj: -16
	I0108 20:38:07.166534   31613 command_runner.go:130] > node/multinode-340815 labeled
	I0108 20:38:07.168545   31613 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0108 20:38:07.168677   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:07.269992   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:07.669597   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:07.759275   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:08.168771   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:08.259368   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:08.669311   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:08.759101   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:09.169415   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:09.258871   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:09.669302   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:09.755831   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:10.169352   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:10.261696   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:10.668884   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:10.784372   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:11.169466   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:11.256074   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:11.669329   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:11.754053   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:12.169376   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:12.254958   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:12.669788   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:12.771385   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:13.169081   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:13.253730   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:13.668906   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:13.755884   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:14.169334   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:14.267997   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:14.669642   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:14.761564   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:15.169383   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:15.256471   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:15.669741   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:15.767630   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:16.169687   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:16.272684   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:16.668945   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:16.773213   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:17.169159   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:17.300977   31613 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 20:38:17.669322   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:38:17.776444   31613 command_runner.go:130] > NAME      SECRETS   AGE
	I0108 20:38:17.776474   31613 command_runner.go:130] > default   0         0s
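Between 20:38:07 and 20:38:17 the runner retries `kubectl get sa default` roughly every half second until the token controller has created the default ServiceAccount; that is the wait measured by the elevateKubeSystemPrivileges duration on the next line. A minimal sketch of that polling loop, assuming the kubectl and kubeconfig paths from the log and a hypothetical two-minute deadline:

```go
// wait_sa.go: keep asking for the "default" ServiceAccount until it exists or a
// timeout elapses, mirroring the retry loop visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.28.4/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout

	for time.Now().Before(deadline) {
		err := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		// NotFound surfaces as a non-zero exit status; back off briefly and retry.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default ServiceAccount")
}
```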
	I0108 20:38:17.778298   31613 kubeadm.go:1088] duration metric: took 10.849802028s to wait for elevateKubeSystemPrivileges.
	I0108 20:38:17.778335   31613 kubeadm.go:406] StartCluster complete in 24.74982995s
	I0108 20:38:17.778359   31613 settings.go:142] acquiring lock: {Name:mk91d3baf51872e4bb0758b94fca7c7249bb9666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:38:17.778426   31613 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:38:17.779105   31613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/kubeconfig: {Name:mkeb2e8a20e31c0c2d5c7e8214a27af3141300ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:38:17.779331   31613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:38:17.779350   31613 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 20:38:17.779426   31613 addons.go:69] Setting storage-provisioner=true in profile "multinode-340815"
	I0108 20:38:17.779443   31613 addons.go:237] Setting addon storage-provisioner=true in "multinode-340815"
	I0108 20:38:17.779454   31613 addons.go:69] Setting default-storageclass=true in profile "multinode-340815"
	I0108 20:38:17.779477   31613 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-340815"
	I0108 20:38:17.779520   31613 host.go:66] Checking if "multinode-340815" exists ...
	I0108 20:38:17.779577   31613 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:38:17.779667   31613 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:38:17.779971   31613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:38:17.779990   31613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:38:17.780020   31613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:38:17.779987   31613 kapi.go:59] client config for multinode-340815: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:38:17.780131   31613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:38:17.780741   31613 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 20:38:17.781061   31613 round_trippers.go:463] GET https://192.168.39.196:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:38:17.781078   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:17.781089   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:17.781098   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:17.793368   31613 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0108 20:38:17.793392   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:17.793402   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:17.793409   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:17.793415   31613 round_trippers.go:580]     Content-Length: 291
	I0108 20:38:17.793423   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:17 GMT
	I0108 20:38:17.793429   31613 round_trippers.go:580]     Audit-Id: 57edd55e-9a43-4e8a-938b-f7dcd0114454
	I0108 20:38:17.793438   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:17.793450   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:17.793493   31613 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8a90ea09-afeb-4dda-ab10-18a22e37ea78","resourceVersion":"233","creationTimestamp":"2024-01-08T20:38:05Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 20:38:17.793888   31613 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8a90ea09-afeb-4dda-ab10-18a22e37ea78","resourceVersion":"233","creationTimestamp":"2024-01-08T20:38:05Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 20:38:17.793947   31613 round_trippers.go:463] PUT https://192.168.39.196:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:38:17.793965   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:17.793976   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:17.793986   31613 round_trippers.go:473]     Content-Type: application/json
	I0108 20:38:17.793997   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:17.795872   31613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
	I0108 20:38:17.795971   31613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40609
	I0108 20:38:17.796322   31613 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:38:17.796385   31613 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:38:17.796879   31613 main.go:141] libmachine: Using API Version  1
	I0108 20:38:17.796880   31613 main.go:141] libmachine: Using API Version  1
	I0108 20:38:17.796908   31613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:38:17.796925   31613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:38:17.797298   31613 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:38:17.797299   31613 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:38:17.797491   31613 main.go:141] libmachine: (multinode-340815) Calling .GetState
	I0108 20:38:17.797842   31613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:38:17.797882   31613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:38:17.799763   31613 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:38:17.800068   31613 kapi.go:59] client config for multinode-340815: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:38:17.800405   31613 addons.go:237] Setting addon default-storageclass=true in "multinode-340815"
	I0108 20:38:17.800445   31613 host.go:66] Checking if "multinode-340815" exists ...
	I0108 20:38:17.800795   31613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:38:17.800828   31613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:38:17.808311   31613 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0108 20:38:17.808364   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:17.808373   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:17.808380   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:17.808390   31613 round_trippers.go:580]     Content-Length: 291
	I0108 20:38:17.808398   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:17 GMT
	I0108 20:38:17.808406   31613 round_trippers.go:580]     Audit-Id: e79568b8-2e73-4948-bdb9-d2dfa8c48363
	I0108 20:38:17.808418   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:17.808427   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:17.808461   31613 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8a90ea09-afeb-4dda-ab10-18a22e37ea78","resourceVersion":"313","creationTimestamp":"2024-01-08T20:38:05Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
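The round_trippers lines show the coredns Scale subresource being read (spec.replicas: 2) and written back with spec.replicas: 1, since a single-node cluster only needs one DNS replica. A minimal client-go sketch of the same GetScale/UpdateScale sequence; the log does this with raw REST requests, and the kubeconfig path here is a placeholder:

```go
// scale_coredns.go: drop the coredns Deployment to one replica via the scale
// subresource, mirroring the GET/PUT on .../deployments/coredns/scale above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deployments := cs.AppsV1().Deployments("kube-system")

	// GET .../deployments/coredns/scale
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// PUT the same object back with spec.replicas set to 1, as in the request body above.
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns scaled to 1 replica")
}
```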
	I0108 20:38:17.813845   31613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42883
	I0108 20:38:17.814290   31613 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:38:17.814756   31613 main.go:141] libmachine: Using API Version  1
	I0108 20:38:17.814778   31613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:38:17.815158   31613 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:38:17.815355   31613 main.go:141] libmachine: (multinode-340815) Calling .GetState
	I0108 20:38:17.817053   31613 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:38:17.819430   31613 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 20:38:17.820481   31613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45927
	I0108 20:38:17.821065   31613 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:38:17.821081   31613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 20:38:17.821099   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:38:17.821412   31613 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:38:17.821878   31613 main.go:141] libmachine: Using API Version  1
	I0108 20:38:17.821894   31613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:38:17.822335   31613 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:38:17.822819   31613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:38:17.822865   31613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:38:17.824125   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:38:17.824655   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:38:17.824685   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:38:17.824862   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:38:17.825074   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:38:17.825224   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:38:17.825377   31613 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:38:17.837927   31613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0108 20:38:17.838464   31613 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:38:17.838956   31613 main.go:141] libmachine: Using API Version  1
	I0108 20:38:17.838982   31613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:38:17.839233   31613 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:38:17.839332   31613 main.go:141] libmachine: (multinode-340815) Calling .GetState
	I0108 20:38:17.840924   31613 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:38:17.841200   31613 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 20:38:17.841217   31613 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 20:38:17.841237   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:38:17.844510   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:38:17.845151   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:38:17.845182   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:38:17.845370   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:38:17.845575   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:38:17.845764   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:38:17.845910   31613 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
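sshutil.go builds a second SSH client to 192.168.39.196:22 as user docker with the per-machine id_rsa so the storageclass manifest can be copied and applied. A minimal sketch of that connection using golang.org/x/crypto/ssh; the key path is shortened here, and ignoring the host key is only a reasonable shortcut for a throwaway test VM:

```go
// ssh_client.go: dial the node as user "docker" with the machine's private key,
// roughly the "new ssh client" step shown in the log above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/path/to/machines/multinode-340815/id_rsa") // placeholder
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.196:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, _ := session.CombinedOutput("sudo test -d /etc/kubernetes/addons && echo ok")
	fmt.Printf("%s", out)
}
```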
	I0108 20:38:17.938774   31613 command_runner.go:130] > apiVersion: v1
	I0108 20:38:17.938801   31613 command_runner.go:130] > data:
	I0108 20:38:17.938808   31613 command_runner.go:130] >   Corefile: |
	I0108 20:38:17.938815   31613 command_runner.go:130] >     .:53 {
	I0108 20:38:17.938821   31613 command_runner.go:130] >         errors
	I0108 20:38:17.938829   31613 command_runner.go:130] >         health {
	I0108 20:38:17.938836   31613 command_runner.go:130] >            lameduck 5s
	I0108 20:38:17.938842   31613 command_runner.go:130] >         }
	I0108 20:38:17.938848   31613 command_runner.go:130] >         ready
	I0108 20:38:17.938858   31613 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 20:38:17.938868   31613 command_runner.go:130] >            pods insecure
	I0108 20:38:17.938881   31613 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 20:38:17.938890   31613 command_runner.go:130] >            ttl 30
	I0108 20:38:17.938900   31613 command_runner.go:130] >         }
	I0108 20:38:17.938906   31613 command_runner.go:130] >         prometheus :9153
	I0108 20:38:17.938917   31613 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 20:38:17.938927   31613 command_runner.go:130] >            max_concurrent 1000
	I0108 20:38:17.938934   31613 command_runner.go:130] >         }
	I0108 20:38:17.938942   31613 command_runner.go:130] >         cache 30
	I0108 20:38:17.938952   31613 command_runner.go:130] >         loop
	I0108 20:38:17.938961   31613 command_runner.go:130] >         reload
	I0108 20:38:17.938971   31613 command_runner.go:130] >         loadbalance
	I0108 20:38:17.938980   31613 command_runner.go:130] >     }
	I0108 20:38:17.938989   31613 command_runner.go:130] > kind: ConfigMap
	I0108 20:38:17.938998   31613 command_runner.go:130] > metadata:
	I0108 20:38:17.939011   31613 command_runner.go:130] >   creationTimestamp: "2024-01-08T20:38:05Z"
	I0108 20:38:17.939021   31613 command_runner.go:130] >   name: coredns
	I0108 20:38:17.939027   31613 command_runner.go:130] >   namespace: kube-system
	I0108 20:38:17.939036   31613 command_runner.go:130] >   resourceVersion: "229"
	I0108 20:38:17.939044   31613 command_runner.go:130] >   uid: d5a0581d-11a8-42c8-8842-c8e10f16d3a9
	I0108 20:38:17.940447   31613 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 20:38:17.972761   31613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 20:38:18.000725   31613 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 20:38:18.281344   31613 round_trippers.go:463] GET https://192.168.39.196:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:38:18.281367   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:18.281375   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:18.281381   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:18.291356   31613 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0108 20:38:18.291381   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:18.291388   31613 round_trippers.go:580]     Content-Length: 291
	I0108 20:38:18.291394   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:18 GMT
	I0108 20:38:18.291399   31613 round_trippers.go:580]     Audit-Id: 45e12df3-ed16-47ea-9e65-e7ba670434ed
	I0108 20:38:18.291404   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:18.291409   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:18.291432   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:18.291438   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:18.302924   31613 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8a90ea09-afeb-4dda-ab10-18a22e37ea78","resourceVersion":"319","creationTimestamp":"2024-01-08T20:38:05Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0108 20:38:18.303063   31613 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-340815" context rescaled to 1 replicas
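Note: the GET above reads the coredns Deployment's autoscaling/v1 scale subresource; kapi.go then pins the deployment at a single replica, which is all a one-node cluster needs. A roughly equivalent manual step (illustrative only, not the call minikube makes internally) would be:

    kubectl -n kube-system scale deployment coredns --replicas=1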
	I0108 20:38:18.303102   31613 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:38:18.306220   31613 out.go:177] * Verifying Kubernetes components...
	I0108 20:38:18.307973   31613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:38:18.879398   31613 command_runner.go:130] > configmap/coredns replaced
	I0108 20:38:18.879442   31613 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
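Note: the bash one-liner run at 20:38:17.940 rewrites the Corefile dumped above before replacing the ConfigMap: it inserts a hosts block ahead of the forward plugin, mapping host.minikube.internal to the host-side gateway (192.168.39.1 on this network), and adds a log directive ahead of errors. Reconstructing from the sed expressions (not read back from the cluster), the replaced Corefile should contain a stanza along these lines:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        ...
    }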
	I0108 20:38:19.161790   31613 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0108 20:38:19.161816   31613 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0108 20:38:19.161830   31613 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 20:38:19.161847   31613 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 20:38:19.161854   31613 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0108 20:38:19.161861   31613 command_runner.go:130] > pod/storage-provisioner created
	I0108 20:38:19.161889   31613 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0108 20:38:19.161928   31613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.161172668s)
	I0108 20:38:19.161971   31613 main.go:141] libmachine: Making call to close driver server
	I0108 20:38:19.161983   31613 main.go:141] libmachine: (multinode-340815) Calling .Close
	I0108 20:38:19.162010   31613 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.189208332s)
	I0108 20:38:19.162055   31613 main.go:141] libmachine: Making call to close driver server
	I0108 20:38:19.162070   31613 main.go:141] libmachine: (multinode-340815) Calling .Close
	I0108 20:38:19.162380   31613 main.go:141] libmachine: (multinode-340815) DBG | Closing plugin on server side
	I0108 20:38:19.162386   31613 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:38:19.162406   31613 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:38:19.162410   31613 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:38:19.162415   31613 main.go:141] libmachine: Making call to close driver server
	I0108 20:38:19.162426   31613 main.go:141] libmachine: (multinode-340815) Calling .Close
	I0108 20:38:19.162426   31613 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:38:19.162492   31613 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:38:19.162512   31613 main.go:141] libmachine: Making call to close driver server
	I0108 20:38:19.162527   31613 main.go:141] libmachine: (multinode-340815) Calling .Close
	I0108 20:38:19.162656   31613 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:38:19.162723   31613 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:38:19.162771   31613 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:38:19.162779   31613 main.go:141] libmachine: (multinode-340815) DBG | Closing plugin on server side
	I0108 20:38:19.162795   31613 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:38:19.162752   31613 kapi.go:59] client config for multinode-340815: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:38:19.162887   31613 round_trippers.go:463] GET https://192.168.39.196:8443/apis/storage.k8s.io/v1/storageclasses
	I0108 20:38:19.162899   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:19.162910   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:19.162924   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:19.163103   31613 node_ready.go:35] waiting up to 6m0s for node "multinode-340815" to be "Ready" ...
	I0108 20:38:19.163188   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:19.163200   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:19.163212   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:19.163224   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:19.173855   31613 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0108 20:38:19.173879   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:19.173886   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:19.173891   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:19.173897   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:19.173902   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:19 GMT
	I0108 20:38:19.173907   31613 round_trippers.go:580]     Audit-Id: 84ee1c91-5e6d-4f9e-b670-2539906167cc
	I0108 20:38:19.173912   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:19.174106   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"315","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 20:38:19.174528   31613 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0108 20:38:19.174543   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:19.174549   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:19.174555   31613 round_trippers.go:580]     Content-Length: 1273
	I0108 20:38:19.174561   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:19 GMT
	I0108 20:38:19.174565   31613 round_trippers.go:580]     Audit-Id: 570fdd95-c71b-44b1-8394-0302c85cde0c
	I0108 20:38:19.174571   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:19.174576   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:19.174581   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:19.174641   31613 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"372"},"items":[{"metadata":{"name":"standard","uid":"b6119657-42f5-417f-a496-c0949fc0022f","resourceVersion":"353","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0108 20:38:19.175003   31613 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6119657-42f5-417f-a496-c0949fc0022f","resourceVersion":"353","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 20:38:19.175054   31613 round_trippers.go:463] PUT https://192.168.39.196:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0108 20:38:19.175062   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:19.175070   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:19.175076   31613 round_trippers.go:473]     Content-Type: application/json
	I0108 20:38:19.175084   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:19.178565   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:38:19.178587   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:19.178594   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:19.178600   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:19.178605   31613 round_trippers.go:580]     Content-Length: 1220
	I0108 20:38:19.178609   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:19 GMT
	I0108 20:38:19.178615   31613 round_trippers.go:580]     Audit-Id: c60e3c57-97a4-4fba-81fe-a9b7d04b206c
	I0108 20:38:19.178620   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:19.178625   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:19.178650   31613 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"b6119657-42f5-417f-a496-c0949fc0022f","resourceVersion":"353","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 20:38:19.178769   31613 main.go:141] libmachine: Making call to close driver server
	I0108 20:38:19.178783   31613 main.go:141] libmachine: (multinode-340815) Calling .Close
	I0108 20:38:19.179058   31613 main.go:141] libmachine: Successfully made call to close driver server
	I0108 20:38:19.179102   31613 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 20:38:19.179071   31613 main.go:141] libmachine: (multinode-340815) DBG | Closing plugin on server side
	I0108 20:38:19.181202   31613 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 20:38:19.182567   31613 addons.go:508] enable addons completed in 1.403222187s: enabled=[storage-provisioner default-storageclass]
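Note: the default-storageclass addon amounts to the PUT above, which re-applies the standard StorageClass with the storageclass.kubernetes.io/is-default-class: "true" annotation so that PVCs created without an explicit class land on the k8s.io/minikube-hostpath provisioner. A quick sanity check from the host (a sketch; assumes kubectl is pointed at this profile's context) is:

    kubectl get storageclass standard -o yaml | grep is-default-class

which should show the annotation set to "true".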
	I0108 20:38:19.663940   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:19.663965   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:19.663973   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:19.663980   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:19.668397   31613 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:38:19.668429   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:19.668438   31613 round_trippers.go:580]     Audit-Id: 2dd7a4bf-aad5-4336-a9e2-82615cb4e454
	I0108 20:38:19.668447   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:19.668454   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:19.668462   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:19.668471   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:19.668479   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:19 GMT
	I0108 20:38:19.668567   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"315","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 20:38:20.164239   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:20.164264   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:20.164272   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:20.164278   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:20.167057   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:20.167079   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:20.167086   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:20.167091   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:20.167097   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:20.167102   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:20 GMT
	I0108 20:38:20.167107   31613 round_trippers.go:580]     Audit-Id: 8955d1cd-037a-4610-b312-206bacb7437e
	I0108 20:38:20.167112   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:20.167259   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"315","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 20:38:20.663747   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:20.663773   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:20.663781   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:20.663788   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:20.667897   31613 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:38:20.667929   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:20.667940   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:20.667948   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:20.667956   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:20 GMT
	I0108 20:38:20.667964   31613 round_trippers.go:580]     Audit-Id: 3b1e466d-f3eb-4c40-ab2b-00ce55d14a68
	I0108 20:38:20.667972   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:20.667981   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:20.668066   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"315","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 20:38:21.163729   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:21.163755   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:21.163763   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:21.163779   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:21.166495   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:21.166518   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:21.166530   31613 round_trippers.go:580]     Audit-Id: 4fd32ad5-a5ad-449b-83b9-61533242e864
	I0108 20:38:21.166538   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:21.166548   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:21.166557   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:21.166566   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:21.166575   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:21 GMT
	I0108 20:38:21.166771   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"315","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 20:38:21.167172   31613 node_ready.go:58] node "multinode-340815" has status "Ready":"False"
	I0108 20:38:21.664249   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:21.664279   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:21.664290   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:21.664298   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:21.666861   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:21.666883   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:21.666890   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:21.666896   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:21.666901   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:21 GMT
	I0108 20:38:21.666909   31613 round_trippers.go:580]     Audit-Id: a5ebdfca-f989-4bb0-bc3f-b1467babd98e
	I0108 20:38:21.666918   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:21.666926   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:21.667096   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"315","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 20:38:22.163829   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:22.163863   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:22.163875   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:22.163885   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:22.166814   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:22.166841   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:22.166853   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:22.166861   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:22 GMT
	I0108 20:38:22.166868   31613 round_trippers.go:580]     Audit-Id: c8753e1e-7247-415f-8b71-f5c890c9e9fe
	I0108 20:38:22.166875   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:22.166890   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:22.166903   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:22.167165   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"315","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 20:38:22.663789   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:22.663838   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:22.663849   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:22.663857   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:22.671906   31613 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 20:38:22.671941   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:22.671952   31613 round_trippers.go:580]     Audit-Id: 9a309041-74d1-476f-b2eb-27250042696b
	I0108 20:38:22.671959   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:22.671967   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:22.671976   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:22.671983   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:22.671989   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:22 GMT
	I0108 20:38:22.673164   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"315","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 20:38:23.163550   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:23.163573   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:23.163581   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:23.163588   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:23.168547   31613 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:38:23.168574   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:23.168585   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:23 GMT
	I0108 20:38:23.168593   31613 round_trippers.go:580]     Audit-Id: 8c407352-44ed-43a6-a902-36c88a557456
	I0108 20:38:23.168603   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:23.168612   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:23.168620   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:23.168632   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:23.169231   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"315","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 20:38:23.169536   31613 node_ready.go:58] node "multinode-340815" has status "Ready":"False"
	I0108 20:38:23.663972   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:23.663995   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:23.664004   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:23.664010   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:23.667081   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:38:23.667108   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:23.667116   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:23.667122   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:23 GMT
	I0108 20:38:23.667127   31613 round_trippers.go:580]     Audit-Id: 1682b660-e0ec-4a6a-aa01-a05eb1e173a1
	I0108 20:38:23.667133   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:23.667141   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:23.667151   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:23.667452   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"315","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 20:38:24.164209   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:24.164251   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:24.164263   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:24.164273   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:24.166926   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:24.166943   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:24.166950   31613 round_trippers.go:580]     Audit-Id: 394ea660-71a9-4b87-99d4-5b76a155e358
	I0108 20:38:24.166956   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:24.166961   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:24.166966   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:24.166974   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:24.166979   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:24 GMT
	I0108 20:38:24.167387   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"315","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0108 20:38:24.664141   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:24.664181   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:24.664192   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:24.664199   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:24.667068   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:24.667088   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:24.667094   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:24.667101   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:24 GMT
	I0108 20:38:24.667109   31613 round_trippers.go:580]     Audit-Id: de843ec6-789a-4f2f-ae3c-45f41a643897
	I0108 20:38:24.667118   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:24.667126   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:24.667135   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:24.667421   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:38:24.667713   31613 node_ready.go:49] node "multinode-340815" has status "Ready":"True"
	I0108 20:38:24.667729   31613 node_ready.go:38] duration metric: took 5.504598521s waiting for node "multinode-340815" to be "Ready" ...
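Note: the readiness wait above is a plain polling loop: minikube re-fetches /api/v1/nodes/multinode-340815 roughly every 500ms until the node's Ready condition reports True, which happens here once the kubelet posts the status visible at resourceVersion 387. The equivalent one-off check from a shell (illustrative; assumes kubectl targets this cluster) is:

    kubectl get node multinode-340815 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'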
	I0108 20:38:24.667737   31613 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:38:24.667789   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:38:24.667798   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:24.667805   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:24.667810   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:24.671487   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:38:24.671516   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:24.671527   31613 round_trippers.go:580]     Audit-Id: 38c7d049-6faa-42c5-8465-815dd342f92e
	I0108 20:38:24.671533   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:24.671538   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:24.671543   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:24.671549   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:24.671554   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:24 GMT
	I0108 20:38:24.672690   31613 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"394"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"394","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54818 chars]
	I0108 20:38:24.675660   31613 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace to be "Ready" ...
	I0108 20:38:24.675738   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:38:24.675744   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:24.675755   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:24.675766   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:24.679229   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:38:24.679247   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:24.679254   31613 round_trippers.go:580]     Audit-Id: 037a1fc5-2d91-48e5-9914-7eb1c3c5ca8c
	I0108 20:38:24.679261   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:24.679267   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:24.679272   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:24.679277   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:24.679282   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:24 GMT
	I0108 20:38:24.679867   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"394","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0108 20:38:24.680298   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:24.680313   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:24.680320   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:24.680326   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:24.683970   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:38:24.683986   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:24.683993   31613 round_trippers.go:580]     Audit-Id: 59316d93-cd65-49c0-8d65-be040e05f056
	I0108 20:38:24.683998   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:24.684003   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:24.684009   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:24.684015   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:24.684020   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:24 GMT
	I0108 20:38:24.684208   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:38:25.176611   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:38:25.176634   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:25.176642   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:25.176648   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:25.180464   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:38:25.180492   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:25.180504   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:25.180512   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:25 GMT
	I0108 20:38:25.180520   31613 round_trippers.go:580]     Audit-Id: 9834e3ae-938c-4ecc-8b51-a236ac97096d
	I0108 20:38:25.180528   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:25.180537   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:25.180546   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:25.180846   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"394","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0108 20:38:25.181263   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:25.181278   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:25.181285   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:25.181291   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:25.185770   31613 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:38:25.185790   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:25.185797   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:25.185802   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:25.185808   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:25.185813   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:25.185818   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:25 GMT
	I0108 20:38:25.185823   31613 round_trippers.go:580]     Audit-Id: 9fe6d55f-c534-4a9b-8c97-755660662968
	I0108 20:38:25.185973   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:38:25.676747   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:38:25.676787   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:25.676796   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:25.676802   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:25.679741   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:25.679767   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:25.679778   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:25.679786   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:25.679793   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:25.679801   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:25.679809   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:25 GMT
	I0108 20:38:25.679817   31613 round_trippers.go:580]     Audit-Id: 5db32226-6d3c-478c-8ce9-9aa2c2be0cbd
	I0108 20:38:25.679910   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"394","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0108 20:38:25.680349   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:25.680366   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:25.680373   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:25.680379   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:25.682717   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:25.682736   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:25.682743   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:25.682749   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:25.682754   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:25 GMT
	I0108 20:38:25.682758   31613 round_trippers.go:580]     Audit-Id: 40ae7855-745f-4f79-ac4a-f06a167b5f94
	I0108 20:38:25.682764   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:25.682777   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:25.682922   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:38:26.176081   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:38:26.176113   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.176121   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.176127   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.179191   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:38:26.179215   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.179225   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.179234   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.179242   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.179251   31613 round_trippers.go:580]     Audit-Id: 7d7f86d2-9696-43c3-bf46-bdaecf454d9c
	I0108 20:38:26.179259   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.179272   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.179797   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"408","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0108 20:38:26.180249   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:26.180263   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.180270   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.180278   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.182748   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:26.182768   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.182776   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.182782   31613 round_trippers.go:580]     Audit-Id: 03f20e67-e015-4ec7-b0b4-ca330b2e15ba
	I0108 20:38:26.182787   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.182792   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.182797   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.182802   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.182973   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:38:26.183269   31613 pod_ready.go:92] pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace has status "Ready":"True"
	I0108 20:38:26.183284   31613 pod_ready.go:81] duration metric: took 1.507600233s waiting for pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace to be "Ready" ...
	I0108 20:38:26.183293   31613 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:38:26.183341   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-340815
	I0108 20:38:26.183348   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.183355   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.183361   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.186045   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:26.186066   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.186075   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.186080   31613 round_trippers.go:580]     Audit-Id: b0cd31c3-49c8-41ce-8676-999ba66d49de
	I0108 20:38:26.186085   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.186090   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.186095   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.186100   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.186381   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-340815","namespace":"kube-system","uid":"c6d1e2c4-6dbc-4495-ac68-c4b030195c2c","resourceVersion":"404","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.196:2379","kubernetes.io/config.hash":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.mirror":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.seen":"2024-01-08T20:38:05.870869333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0108 20:38:26.186732   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:26.186744   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.186751   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.186757   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.190056   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:38:26.190076   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.190082   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.190088   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.190093   31613 round_trippers.go:580]     Audit-Id: a0d94647-4269-40dd-95db-fb7e6b24cfca
	I0108 20:38:26.190098   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.190106   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.190115   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.190396   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:38:26.190682   31613 pod_ready.go:92] pod "etcd-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:38:26.190697   31613 pod_ready.go:81] duration metric: took 7.397045ms waiting for pod "etcd-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:38:26.190708   31613 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:38:26.190759   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-340815
	I0108 20:38:26.190766   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.190773   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.190779   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.194587   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:38:26.194609   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.194616   31613 round_trippers.go:580]     Audit-Id: 1f2f0f70-98d0-4ca9-90da-e677d84fcafb
	I0108 20:38:26.194622   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.194627   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.194632   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.194645   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.194650   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.195807   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-340815","namespace":"kube-system","uid":"523b3dcf-2fae-43b4-a9c6-cd2337ae6d6f","resourceVersion":"405","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.196:8443","kubernetes.io/config.hash":"5a9f4acc9b0ffa502cc0493a6d857b92","kubernetes.io/config.mirror":"5a9f4acc9b0ffa502cc0493a6d857b92","kubernetes.io/config.seen":"2024-01-08T20:38:05.870870627Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0108 20:38:26.196210   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:26.196223   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.196230   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.196236   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.199945   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:38:26.199963   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.199972   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.199978   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.199984   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.199989   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.199994   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.200000   31613 round_trippers.go:580]     Audit-Id: b3cedc9b-7a0d-467c-bd29-f483b332718a
	I0108 20:38:26.200680   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:38:26.200949   31613 pod_ready.go:92] pod "kube-apiserver-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:38:26.200962   31613 pod_ready.go:81] duration metric: took 10.248477ms waiting for pod "kube-apiserver-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:38:26.200972   31613 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:38:26.201024   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-340815
	I0108 20:38:26.201032   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.201038   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.201044   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.205490   31613 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:38:26.205509   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.205516   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.205521   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.205526   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.205531   31613 round_trippers.go:580]     Audit-Id: 53ec460e-faf7-46f7-9281-b883d2e2267e
	I0108 20:38:26.205536   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.205542   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.205702   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-340815","namespace":"kube-system","uid":"3b29ca3f-d23b-4add-a5fb-d59381398862","resourceVersion":"406","creationTimestamp":"2024-01-08T20:38:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1f741652d6560a2396658aaab123d801","kubernetes.io/config.mirror":"1f741652d6560a2396658aaab123d801","kubernetes.io/config.seen":"2024-01-08T20:37:56.785419514Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0108 20:38:26.206064   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:26.206076   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.206082   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.206088   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.210047   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:38:26.210066   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.210072   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.210078   31613 round_trippers.go:580]     Audit-Id: 9bb14436-a117-41e3-8658-232e49c24381
	I0108 20:38:26.210083   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.210088   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.210093   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.210101   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.210702   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:38:26.210974   31613 pod_ready.go:92] pod "kube-controller-manager-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:38:26.210988   31613 pod_ready.go:81] duration metric: took 10.010246ms waiting for pod "kube-controller-manager-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:38:26.210997   31613 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z9xrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:38:26.211041   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9xrv
	I0108 20:38:26.211048   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.211054   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.211060   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.213167   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:26.213185   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.213192   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.213198   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.213205   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.213213   31613 round_trippers.go:580]     Audit-Id: a949f8ab-ba01-454b-a237-12e90ffbbbc6
	I0108 20:38:26.213222   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.213231   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.213381   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z9xrv","generateName":"kube-proxy-","namespace":"kube-system","uid":"a0843325-2adf-4c2f-8489-067554648b52","resourceVersion":"377","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 20:38:26.265007   31613 request.go:629] Waited for 51.243437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:26.265064   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:26.265069   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.265076   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.265082   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.267729   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:26.267750   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.267757   31613 round_trippers.go:580]     Audit-Id: c00bf50e-d6f1-4975-b884-047b4474d2ea
	I0108 20:38:26.267763   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.267770   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.267775   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.267783   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.267788   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.267938   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:38:26.268254   31613 pod_ready.go:92] pod "kube-proxy-z9xrv" in "kube-system" namespace has status "Ready":"True"
	I0108 20:38:26.268270   31613 pod_ready.go:81] duration metric: took 57.267368ms waiting for pod "kube-proxy-z9xrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:38:26.268279   31613 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:38:26.464728   31613 request.go:629] Waited for 196.386868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340815
	I0108 20:38:26.464809   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340815
	I0108 20:38:26.464817   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.464825   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.464831   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.467593   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:26.467613   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.467623   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.467631   31613 round_trippers.go:580]     Audit-Id: 671c8c4e-fbda-4619-a5da-325917667fe9
	I0108 20:38:26.467639   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.467646   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.467654   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.467662   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.467971   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-340815","namespace":"kube-system","uid":"008c4fe8-78b1-4326-8452-215037af26d6","resourceVersion":"403","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0c87b92132627dab75791d3cff759e12","kubernetes.io/config.mirror":"0c87b92132627dab75791d3cff759e12","kubernetes.io/config.seen":"2024-01-08T20:38:05.870865233Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0108 20:38:26.664756   31613 request.go:629] Waited for 196.37327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:26.664849   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:38:26.664857   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.664870   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.664884   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.667668   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:26.667692   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.667699   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.667705   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.667710   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.667715   31613 round_trippers.go:580]     Audit-Id: 24432df4-be7d-4d95-a43d-9b36aa03e8a5
	I0108 20:38:26.667720   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.667725   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.668016   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:38:26.668406   31613 pod_ready.go:92] pod "kube-scheduler-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:38:26.668429   31613 pod_ready.go:81] duration metric: took 400.144801ms waiting for pod "kube-scheduler-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:38:26.668439   31613 pod_ready.go:38] duration metric: took 2.000694734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:38:26.668458   31613 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:38:26.668501   31613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:38:26.682955   31613 command_runner.go:130] > 1082
	I0108 20:38:26.683007   31613 api_server.go:72] duration metric: took 8.379865049s to wait for apiserver process to appear ...
	I0108 20:38:26.683019   31613 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:38:26.683035   31613 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0108 20:38:26.688777   31613 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0108 20:38:26.688863   31613 round_trippers.go:463] GET https://192.168.39.196:8443/version
	I0108 20:38:26.688875   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.688887   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.688899   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.689879   31613 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0108 20:38:26.689897   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.689907   31613 round_trippers.go:580]     Content-Length: 264
	I0108 20:38:26.689916   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.689926   31613 round_trippers.go:580]     Audit-Id: 4c268c91-1b53-4c01-b5bf-7aeb2823da72
	I0108 20:38:26.689939   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.689951   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.689964   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.689976   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.690005   31613 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 20:38:26.690092   31613 api_server.go:141] control plane version: v1.28.4
	I0108 20:38:26.690112   31613 api_server.go:131] duration metric: took 7.086457ms to wait for apiserver health ...
	I0108 20:38:26.690121   31613 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:38:26.864537   31613 request.go:629] Waited for 174.353236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:38:26.864605   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:38:26.864610   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:26.864617   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:26.864623   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:26.868184   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:38:26.868216   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:26.868225   31613 round_trippers.go:580]     Audit-Id: ceff4830-2114-4896-8041-4ffdc1a2350e
	I0108 20:38:26.868233   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:26.868242   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:26.868251   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:26.868263   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:26.868272   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:26 GMT
	I0108 20:38:26.869411   31613 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"408","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I0108 20:38:26.871157   31613 system_pods.go:59] 8 kube-system pods found
	I0108 20:38:26.871178   31613 system_pods.go:61] "coredns-5dd5756b68-h4v6v" [5c1ccbb8-1747-4b6f-b40c-c54670e49d54] Running
	I0108 20:38:26.871183   31613 system_pods.go:61] "etcd-multinode-340815" [c6d1e2c4-6dbc-4495-ac68-c4b030195c2c] Running
	I0108 20:38:26.871187   31613 system_pods.go:61] "kindnet-h48qs" [65d532d3-b3ca-493d-b287-1b03dbdad538] Running
	I0108 20:38:26.871192   31613 system_pods.go:61] "kube-apiserver-multinode-340815" [523b3dcf-2fae-43b4-a9c6-cd2337ae6d6f] Running
	I0108 20:38:26.871197   31613 system_pods.go:61] "kube-controller-manager-multinode-340815" [3b29ca3f-d23b-4add-a5fb-d59381398862] Running
	I0108 20:38:26.871200   31613 system_pods.go:61] "kube-proxy-z9xrv" [a0843325-2adf-4c2f-8489-067554648b52] Running
	I0108 20:38:26.871204   31613 system_pods.go:61] "kube-scheduler-multinode-340815" [008c4fe8-78b1-4326-8452-215037af26d6] Running
	I0108 20:38:26.871208   31613 system_pods.go:61] "storage-provisioner" [de357297-4bd9-4c71-ada5-ceace0d38cfb] Running
	I0108 20:38:26.871213   31613 system_pods.go:74] duration metric: took 181.08702ms to wait for pod list to return data ...
	I0108 20:38:26.871221   31613 default_sa.go:34] waiting for default service account to be created ...
	I0108 20:38:27.064670   31613 request.go:629] Waited for 193.392162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0108 20:38:27.064774   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0108 20:38:27.064783   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:27.064796   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:27.064811   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:27.067523   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:38:27.067543   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:27.067549   31613 round_trippers.go:580]     Audit-Id: 59616623-6834-4ead-87f2-a930c9648fd0
	I0108 20:38:27.067555   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:27.067560   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:27.067565   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:27.067570   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:27.067576   31613 round_trippers.go:580]     Content-Length: 261
	I0108 20:38:27.067581   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:27 GMT
	I0108 20:38:27.067601   31613 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"760bcece-5b51-45a3-9d4c-77490cf0e377","resourceVersion":"295","creationTimestamp":"2024-01-08T20:38:17Z"}}]}
	I0108 20:38:27.067809   31613 default_sa.go:45] found service account: "default"
	I0108 20:38:27.067826   31613 default_sa.go:55] duration metric: took 196.600244ms for default service account to be created ...
	I0108 20:38:27.067833   31613 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 20:38:27.265087   31613 request.go:629] Waited for 197.201266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:38:27.265158   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:38:27.265165   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:27.265174   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:27.265184   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:27.269269   31613 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:38:27.269301   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:27.269322   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:27.269331   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:27.269339   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:27.269347   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:27.269355   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:27 GMT
	I0108 20:38:27.269363   31613 round_trippers.go:580]     Audit-Id: 4bfeaae9-c3c6-4c07-80f7-4c2b54ac1d90
	I0108 20:38:27.270571   31613 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"408","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I0108 20:38:27.272221   31613 system_pods.go:86] 8 kube-system pods found
	I0108 20:38:27.272245   31613 system_pods.go:89] "coredns-5dd5756b68-h4v6v" [5c1ccbb8-1747-4b6f-b40c-c54670e49d54] Running
	I0108 20:38:27.272252   31613 system_pods.go:89] "etcd-multinode-340815" [c6d1e2c4-6dbc-4495-ac68-c4b030195c2c] Running
	I0108 20:38:27.272258   31613 system_pods.go:89] "kindnet-h48qs" [65d532d3-b3ca-493d-b287-1b03dbdad538] Running
	I0108 20:38:27.272265   31613 system_pods.go:89] "kube-apiserver-multinode-340815" [523b3dcf-2fae-43b4-a9c6-cd2337ae6d6f] Running
	I0108 20:38:27.272271   31613 system_pods.go:89] "kube-controller-manager-multinode-340815" [3b29ca3f-d23b-4add-a5fb-d59381398862] Running
	I0108 20:38:27.272279   31613 system_pods.go:89] "kube-proxy-z9xrv" [a0843325-2adf-4c2f-8489-067554648b52] Running
	I0108 20:38:27.272286   31613 system_pods.go:89] "kube-scheduler-multinode-340815" [008c4fe8-78b1-4326-8452-215037af26d6] Running
	I0108 20:38:27.272294   31613 system_pods.go:89] "storage-provisioner" [de357297-4bd9-4c71-ada5-ceace0d38cfb] Running
	I0108 20:38:27.272304   31613 system_pods.go:126] duration metric: took 204.464889ms to wait for k8s-apps to be running ...
	I0108 20:38:27.272314   31613 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:38:27.272360   31613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:38:27.286591   31613 system_svc.go:56] duration metric: took 14.268779ms WaitForService to wait for kubelet.
	I0108 20:38:27.286621   31613 kubeadm.go:581] duration metric: took 8.98348245s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:38:27.286646   31613 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:38:27.465099   31613 request.go:629] Waited for 178.363951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes
	I0108 20:38:27.465155   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0108 20:38:27.465161   31613 round_trippers.go:469] Request Headers:
	I0108 20:38:27.465168   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:38:27.465174   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:38:27.468430   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:38:27.468455   31613 round_trippers.go:577] Response Headers:
	I0108 20:38:27.468468   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:38:27 GMT
	I0108 20:38:27.468475   31613 round_trippers.go:580]     Audit-Id: 9f40bd33-fe79-4517-a5fe-e7aa7d0cbb37
	I0108 20:38:27.468483   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:38:27.468490   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:38:27.468497   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:38:27.468504   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:38:27.468685   31613 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I0108 20:38:27.469176   31613 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:38:27.469218   31613 node_conditions.go:123] node cpu capacity is 2
	I0108 20:38:27.469231   31613 node_conditions.go:105] duration metric: took 182.579311ms to run NodePressure ...
	I0108 20:38:27.469244   31613 start.go:228] waiting for startup goroutines ...
	I0108 20:38:27.469252   31613 start.go:233] waiting for cluster config update ...
	I0108 20:38:27.469265   31613 start.go:242] writing updated cluster config ...
	I0108 20:38:27.472084   31613 out.go:177] 
	I0108 20:38:27.474067   31613 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:38:27.474133   31613 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/config.json ...
	I0108 20:38:27.476123   31613 out.go:177] * Starting worker node multinode-340815-m02 in cluster multinode-340815
	I0108 20:38:27.477463   31613 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:38:27.477483   31613 cache.go:56] Caching tarball of preloaded images
	I0108 20:38:27.477559   31613 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 20:38:27.477570   31613 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 20:38:27.477643   31613 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/config.json ...
	I0108 20:38:27.477837   31613 start.go:365] acquiring machines lock for multinode-340815-m02: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 20:38:27.477891   31613 start.go:369] acquired machines lock for "multinode-340815-m02" in 29.047µs
	I0108 20:38:27.477915   31613 start.go:93] Provisioning new machine with config: &{Name:multinode-340815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 20:38:27.477983   31613 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0108 20:38:27.479704   31613 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0108 20:38:27.479774   31613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:38:27.479795   31613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:38:27.493988   31613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35995
	I0108 20:38:27.494400   31613 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:38:27.494853   31613 main.go:141] libmachine: Using API Version  1
	I0108 20:38:27.494883   31613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:38:27.495210   31613 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:38:27.495400   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetMachineName
	I0108 20:38:27.495542   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:38:27.495686   31613 start.go:159] libmachine.API.Create for "multinode-340815" (driver="kvm2")
	I0108 20:38:27.495707   31613 client.go:168] LocalClient.Create starting
	I0108 20:38:27.495737   31613 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem
	I0108 20:38:27.495779   31613 main.go:141] libmachine: Decoding PEM data...
	I0108 20:38:27.495800   31613 main.go:141] libmachine: Parsing certificate...
	I0108 20:38:27.495852   31613 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem
	I0108 20:38:27.495873   31613 main.go:141] libmachine: Decoding PEM data...
	I0108 20:38:27.495884   31613 main.go:141] libmachine: Parsing certificate...
	I0108 20:38:27.495903   31613 main.go:141] libmachine: Running pre-create checks...
	I0108 20:38:27.495912   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .PreCreateCheck
	I0108 20:38:27.496071   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetConfigRaw
	I0108 20:38:27.496463   31613 main.go:141] libmachine: Creating machine...
	I0108 20:38:27.496477   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .Create
	I0108 20:38:27.496614   31613 main.go:141] libmachine: (multinode-340815-m02) Creating KVM machine...
	I0108 20:38:27.497926   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found existing default KVM network
	I0108 20:38:27.498015   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found existing private KVM network mk-multinode-340815
	I0108 20:38:27.498194   31613 main.go:141] libmachine: (multinode-340815-m02) Setting up store path in /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02 ...
	I0108 20:38:27.498217   31613 main.go:141] libmachine: (multinode-340815-m02) Building disk image from file:///home/jenkins/minikube-integration/17907-10702/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 20:38:27.498291   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:27.498192   31980 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:38:27.498357   31613 main.go:141] libmachine: (multinode-340815-m02) Downloading /home/jenkins/minikube-integration/17907-10702/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17907-10702/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 20:38:27.704678   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:27.704566   31980 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/id_rsa...
	I0108 20:38:27.817312   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:27.817169   31980 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/multinode-340815-m02.rawdisk...
	I0108 20:38:27.817349   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Writing magic tar header
	I0108 20:38:27.817376   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Writing SSH key tar header
	I0108 20:38:27.817389   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:27.817279   31980 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02 ...
	I0108 20:38:27.817408   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02
	I0108 20:38:27.817418   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube/machines
	I0108 20:38:27.817430   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:38:27.817455   31613 main.go:141] libmachine: (multinode-340815-m02) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02 (perms=drwx------)
	I0108 20:38:27.817472   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702
	I0108 20:38:27.817495   31613 main.go:141] libmachine: (multinode-340815-m02) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube/machines (perms=drwxr-xr-x)
	I0108 20:38:27.817507   31613 main.go:141] libmachine: (multinode-340815-m02) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube (perms=drwxr-xr-x)
	I0108 20:38:27.817514   31613 main.go:141] libmachine: (multinode-340815-m02) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702 (perms=drwxrwxr-x)
	I0108 20:38:27.817525   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 20:38:27.817540   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Checking permissions on dir: /home/jenkins
	I0108 20:38:27.817554   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Checking permissions on dir: /home
	I0108 20:38:27.817568   31613 main.go:141] libmachine: (multinode-340815-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 20:38:27.817582   31613 main.go:141] libmachine: (multinode-340815-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 20:38:27.817590   31613 main.go:141] libmachine: (multinode-340815-m02) Creating domain...
	I0108 20:38:27.817602   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Skipping /home - not owner
	I0108 20:38:27.818473   31613 main.go:141] libmachine: (multinode-340815-m02) define libvirt domain using xml: 
	I0108 20:38:27.818492   31613 main.go:141] libmachine: (multinode-340815-m02) <domain type='kvm'>
	I0108 20:38:27.818504   31613 main.go:141] libmachine: (multinode-340815-m02)   <name>multinode-340815-m02</name>
	I0108 20:38:27.818523   31613 main.go:141] libmachine: (multinode-340815-m02)   <memory unit='MiB'>2200</memory>
	I0108 20:38:27.818531   31613 main.go:141] libmachine: (multinode-340815-m02)   <vcpu>2</vcpu>
	I0108 20:38:27.818539   31613 main.go:141] libmachine: (multinode-340815-m02)   <features>
	I0108 20:38:27.818545   31613 main.go:141] libmachine: (multinode-340815-m02)     <acpi/>
	I0108 20:38:27.818553   31613 main.go:141] libmachine: (multinode-340815-m02)     <apic/>
	I0108 20:38:27.818559   31613 main.go:141] libmachine: (multinode-340815-m02)     <pae/>
	I0108 20:38:27.818569   31613 main.go:141] libmachine: (multinode-340815-m02)     
	I0108 20:38:27.818580   31613 main.go:141] libmachine: (multinode-340815-m02)   </features>
	I0108 20:38:27.818597   31613 main.go:141] libmachine: (multinode-340815-m02)   <cpu mode='host-passthrough'>
	I0108 20:38:27.818610   31613 main.go:141] libmachine: (multinode-340815-m02)   
	I0108 20:38:27.818619   31613 main.go:141] libmachine: (multinode-340815-m02)   </cpu>
	I0108 20:38:27.818625   31613 main.go:141] libmachine: (multinode-340815-m02)   <os>
	I0108 20:38:27.818633   31613 main.go:141] libmachine: (multinode-340815-m02)     <type>hvm</type>
	I0108 20:38:27.818639   31613 main.go:141] libmachine: (multinode-340815-m02)     <boot dev='cdrom'/>
	I0108 20:38:27.818647   31613 main.go:141] libmachine: (multinode-340815-m02)     <boot dev='hd'/>
	I0108 20:38:27.818658   31613 main.go:141] libmachine: (multinode-340815-m02)     <bootmenu enable='no'/>
	I0108 20:38:27.818669   31613 main.go:141] libmachine: (multinode-340815-m02)   </os>
	I0108 20:38:27.818682   31613 main.go:141] libmachine: (multinode-340815-m02)   <devices>
	I0108 20:38:27.818703   31613 main.go:141] libmachine: (multinode-340815-m02)     <disk type='file' device='cdrom'>
	I0108 20:38:27.818722   31613 main.go:141] libmachine: (multinode-340815-m02)       <source file='/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/boot2docker.iso'/>
	I0108 20:38:27.818731   31613 main.go:141] libmachine: (multinode-340815-m02)       <target dev='hdc' bus='scsi'/>
	I0108 20:38:27.818740   31613 main.go:141] libmachine: (multinode-340815-m02)       <readonly/>
	I0108 20:38:27.818779   31613 main.go:141] libmachine: (multinode-340815-m02)     </disk>
	I0108 20:38:27.818808   31613 main.go:141] libmachine: (multinode-340815-m02)     <disk type='file' device='disk'>
	I0108 20:38:27.818827   31613 main.go:141] libmachine: (multinode-340815-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 20:38:27.818846   31613 main.go:141] libmachine: (multinode-340815-m02)       <source file='/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/multinode-340815-m02.rawdisk'/>
	I0108 20:38:27.818862   31613 main.go:141] libmachine: (multinode-340815-m02)       <target dev='hda' bus='virtio'/>
	I0108 20:38:27.818871   31613 main.go:141] libmachine: (multinode-340815-m02)     </disk>
	I0108 20:38:27.818883   31613 main.go:141] libmachine: (multinode-340815-m02)     <interface type='network'>
	I0108 20:38:27.818903   31613 main.go:141] libmachine: (multinode-340815-m02)       <source network='mk-multinode-340815'/>
	I0108 20:38:27.818918   31613 main.go:141] libmachine: (multinode-340815-m02)       <model type='virtio'/>
	I0108 20:38:27.818935   31613 main.go:141] libmachine: (multinode-340815-m02)     </interface>
	I0108 20:38:27.818950   31613 main.go:141] libmachine: (multinode-340815-m02)     <interface type='network'>
	I0108 20:38:27.818962   31613 main.go:141] libmachine: (multinode-340815-m02)       <source network='default'/>
	I0108 20:38:27.818974   31613 main.go:141] libmachine: (multinode-340815-m02)       <model type='virtio'/>
	I0108 20:38:27.818987   31613 main.go:141] libmachine: (multinode-340815-m02)     </interface>
	I0108 20:38:27.818996   31613 main.go:141] libmachine: (multinode-340815-m02)     <serial type='pty'>
	I0108 20:38:27.819002   31613 main.go:141] libmachine: (multinode-340815-m02)       <target port='0'/>
	I0108 20:38:27.819012   31613 main.go:141] libmachine: (multinode-340815-m02)     </serial>
	I0108 20:38:27.819028   31613 main.go:141] libmachine: (multinode-340815-m02)     <console type='pty'>
	I0108 20:38:27.819042   31613 main.go:141] libmachine: (multinode-340815-m02)       <target type='serial' port='0'/>
	I0108 20:38:27.819052   31613 main.go:141] libmachine: (multinode-340815-m02)     </console>
	I0108 20:38:27.819062   31613 main.go:141] libmachine: (multinode-340815-m02)     <rng model='virtio'>
	I0108 20:38:27.819077   31613 main.go:141] libmachine: (multinode-340815-m02)       <backend model='random'>/dev/random</backend>
	I0108 20:38:27.819089   31613 main.go:141] libmachine: (multinode-340815-m02)     </rng>
	I0108 20:38:27.819107   31613 main.go:141] libmachine: (multinode-340815-m02)     
	I0108 20:38:27.819126   31613 main.go:141] libmachine: (multinode-340815-m02)     
	I0108 20:38:27.819140   31613 main.go:141] libmachine: (multinode-340815-m02)   </devices>
	I0108 20:38:27.819151   31613 main.go:141] libmachine: (multinode-340815-m02) </domain>
	I0108 20:38:27.819165   31613 main.go:141] libmachine: (multinode-340815-m02) 
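The XML above is the complete libvirt domain definition for the worker VM: CPUs, memory, boot order, the boot ISO plus raw disk, and two virtio NICs attached to the default and mk-multinode-340815 networks. As a reading aid only (not part of the test run), the domain libvirt actually stored can be inspected with virsh using the qemu:///system URI from the profile config:

    virsh -c qemu:///system list --all                    # the newly defined domain should be listed here
    virsh -c qemu:///system dumpxml multinode-340815-m02  # prints the XML exactly as libvirt persisted it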
	I0108 20:38:27.826333   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:48:63:d3 in network default
	I0108 20:38:27.826837   31613 main.go:141] libmachine: (multinode-340815-m02) Ensuring networks are active...
	I0108 20:38:27.826861   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:27.827717   31613 main.go:141] libmachine: (multinode-340815-m02) Ensuring network default is active
	I0108 20:38:27.828024   31613 main.go:141] libmachine: (multinode-340815-m02) Ensuring network mk-multinode-340815 is active
	I0108 20:38:27.828479   31613 main.go:141] libmachine: (multinode-340815-m02) Getting domain xml...
	I0108 20:38:27.829288   31613 main.go:141] libmachine: (multinode-340815-m02) Creating domain...
	I0108 20:38:29.065966   31613 main.go:141] libmachine: (multinode-340815-m02) Waiting to get IP...
	I0108 20:38:29.066722   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:29.067122   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:29.067146   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:29.067107   31980 retry.go:31] will retry after 237.512233ms: waiting for machine to come up
	I0108 20:38:29.306603   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:29.306991   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:29.307022   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:29.306941   31980 retry.go:31] will retry after 269.156017ms: waiting for machine to come up
	I0108 20:38:29.577299   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:29.577758   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:29.577790   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:29.577700   31980 retry.go:31] will retry after 424.79407ms: waiting for machine to come up
	I0108 20:38:30.004492   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:30.004908   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:30.004938   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:30.004850   31980 retry.go:31] will retry after 459.293162ms: waiting for machine to come up
	I0108 20:38:30.465299   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:30.465752   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:30.465785   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:30.465666   31980 retry.go:31] will retry after 459.026454ms: waiting for machine to come up
	I0108 20:38:30.926522   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:30.927006   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:30.927054   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:30.926949   31980 retry.go:31] will retry after 892.301249ms: waiting for machine to come up
	I0108 20:38:31.821084   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:31.821520   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:31.821540   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:31.821480   31980 retry.go:31] will retry after 830.017993ms: waiting for machine to come up
	I0108 20:38:32.653250   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:32.653635   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:32.653668   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:32.653582   31980 retry.go:31] will retry after 1.326594722s: waiting for machine to come up
	I0108 20:38:33.981327   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:33.981664   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:33.981697   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:33.981622   31980 retry.go:31] will retry after 1.509974025s: waiting for machine to come up
	I0108 20:38:35.493334   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:35.493847   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:35.493880   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:35.493790   31980 retry.go:31] will retry after 1.514359492s: waiting for machine to come up
	I0108 20:38:37.009949   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:37.010317   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:37.010345   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:37.010275   31980 retry.go:31] will retry after 2.120836065s: waiting for machine to come up
	I0108 20:38:39.132556   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:39.132973   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:39.133005   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:39.132914   31980 retry.go:31] will retry after 3.409086286s: waiting for machine to come up
	I0108 20:38:42.543653   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:42.544040   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:42.544064   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:42.544003   31980 retry.go:31] will retry after 3.383769596s: waiting for machine to come up
	I0108 20:38:45.931482   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:45.932107   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find current IP address of domain multinode-340815-m02 in network mk-multinode-340815
	I0108 20:38:45.932140   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | I0108 20:38:45.932015   31980 retry.go:31] will retry after 4.473041515s: waiting for machine to come up
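Each retry above is libmachine polling for a DHCP lease on the private mk-multinode-340815 network, backing off until the guest's NIC (MAC 52:54:00:85:58:8d) shows up. Roughly the same check can be made by hand with virsh (a sketch, assuming the same qemu:///system connection):

    virsh -c qemu:///system net-dhcp-leases mk-multinode-340815  # lists the MAC/IP leases handed out on the private network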
	I0108 20:38:50.408958   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:50.409506   31613 main.go:141] libmachine: (multinode-340815-m02) Found IP for machine: 192.168.39.78
	I0108 20:38:50.409541   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has current primary IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:50.409552   31613 main.go:141] libmachine: (multinode-340815-m02) Reserving static IP address...
	I0108 20:38:50.409947   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | unable to find host DHCP lease matching {name: "multinode-340815-m02", mac: "52:54:00:85:58:8d", ip: "192.168.39.78"} in network mk-multinode-340815
	I0108 20:38:50.485209   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Getting to WaitForSSH function...
	I0108 20:38:50.485245   31613 main.go:141] libmachine: (multinode-340815-m02) Reserved static IP address: 192.168.39.78
	I0108 20:38:50.485286   31613 main.go:141] libmachine: (multinode-340815-m02) Waiting for SSH to be available...
	I0108 20:38:50.488083   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:50.488476   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:minikube Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:50.488510   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:50.488641   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Using SSH client type: external
	I0108 20:38:50.488672   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/id_rsa (-rw-------)
	I0108 20:38:50.488706   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 20:38:50.488726   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | About to run SSH command:
	I0108 20:38:50.488747   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | exit 0
	I0108 20:38:50.584225   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | SSH cmd err, output: <nil>: 
	I0108 20:38:50.584521   31613 main.go:141] libmachine: (multinode-340815-m02) KVM machine creation complete!
	I0108 20:38:50.584758   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetConfigRaw
	I0108 20:38:50.585370   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:38:50.585598   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:38:50.585794   31613 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 20:38:50.585815   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetState
	I0108 20:38:50.587085   31613 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 20:38:50.587102   31613 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 20:38:50.587109   31613 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 20:38:50.587115   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:38:50.589418   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:50.589708   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:50.589740   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:50.589848   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:38:50.590055   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:50.590192   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:50.590369   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:38:50.590552   31613 main.go:141] libmachine: Using SSH client type: native
	I0108 20:38:50.590886   31613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0108 20:38:50.590899   31613 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 20:38:50.719748   31613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:38:50.719771   31613 main.go:141] libmachine: Detecting the provisioner...
	I0108 20:38:50.719779   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:38:50.722585   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:50.723033   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:50.723074   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:50.723314   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:38:50.723523   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:50.723798   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:50.723959   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:38:50.724119   31613 main.go:141] libmachine: Using SSH client type: native
	I0108 20:38:50.724486   31613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0108 20:38:50.724499   31613 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 20:38:50.856937   31613 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 20:38:50.857031   31613 main.go:141] libmachine: found compatible host: buildroot
	I0108 20:38:50.857046   31613 main.go:141] libmachine: Provisioning with buildroot...
	I0108 20:38:50.857058   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetMachineName
	I0108 20:38:50.857349   31613 buildroot.go:166] provisioning hostname "multinode-340815-m02"
	I0108 20:38:50.857371   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetMachineName
	I0108 20:38:50.857537   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:38:50.860378   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:50.860741   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:50.860764   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:50.860871   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:38:50.861035   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:50.861202   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:50.861357   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:38:50.861513   31613 main.go:141] libmachine: Using SSH client type: native
	I0108 20:38:50.861821   31613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0108 20:38:50.861833   31613 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-340815-m02 && echo "multinode-340815-m02" | sudo tee /etc/hostname
	I0108 20:38:51.002792   31613 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-340815-m02
	
	I0108 20:38:51.002822   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:38:51.005596   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.005957   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:51.005993   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.006180   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:38:51.006373   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:51.006564   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:51.006725   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:38:51.006881   31613 main.go:141] libmachine: Using SSH client type: native
	I0108 20:38:51.007247   31613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0108 20:38:51.007267   31613 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-340815-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-340815-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-340815-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:38:51.140782   31613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:38:51.140817   31613 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17907-10702/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-10702/.minikube}
	I0108 20:38:51.140837   31613 buildroot.go:174] setting up certificates
	I0108 20:38:51.140849   31613 provision.go:83] configureAuth start
	I0108 20:38:51.140862   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetMachineName
	I0108 20:38:51.141127   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetIP
	I0108 20:38:51.143899   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.144339   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:51.144374   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.144554   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:38:51.146914   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.147271   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:51.147310   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.147510   31613 provision.go:138] copyHostCerts
	I0108 20:38:51.147541   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 20:38:51.147578   31613 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem, removing ...
	I0108 20:38:51.147591   31613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 20:38:51.147670   31613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem (1082 bytes)
	I0108 20:38:51.147777   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 20:38:51.147799   31613 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem, removing ...
	I0108 20:38:51.147806   31613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 20:38:51.147836   31613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem (1123 bytes)
	I0108 20:38:51.147887   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 20:38:51.147903   31613 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem, removing ...
	I0108 20:38:51.147910   31613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 20:38:51.147931   31613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem (1675 bytes)
	I0108 20:38:51.147979   31613 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem org=jenkins.multinode-340815-m02 san=[192.168.39.78 192.168.39.78 localhost 127.0.0.1 minikube multinode-340815-m02]
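The server certificate generated here is signed by the minikube CA and carries the SANs listed in the log line above (192.168.39.78, localhost, 127.0.0.1, minikube, multinode-340815-m02). If needed, the SANs can be verified on disk with openssl, using the path shown in the log (a verification sketch, not something the test itself runs):

    openssl x509 -in /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'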
	I0108 20:38:51.390941   31613 provision.go:172] copyRemoteCerts
	I0108 20:38:51.390996   31613 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:38:51.391017   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:38:51.393744   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.394121   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:51.394162   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.394258   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:38:51.394418   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:51.394590   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:38:51.394714   31613 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/id_rsa Username:docker}
	I0108 20:38:51.490251   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 20:38:51.490333   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:38:51.516869   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 20:38:51.516953   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 20:38:51.544800   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 20:38:51.544868   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 20:38:51.568326   31613 provision.go:86] duration metric: configureAuth took 427.466025ms
	I0108 20:38:51.568358   31613 buildroot.go:189] setting minikube options for container-runtime
	I0108 20:38:51.568550   31613 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:38:51.568633   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:38:51.571246   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.571617   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:51.571647   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.571803   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:38:51.572013   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:51.572195   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:51.572330   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:38:51.572510   31613 main.go:141] libmachine: Using SSH client type: native
	I0108 20:38:51.572802   31613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0108 20:38:51.572819   31613 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:38:51.889726   31613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
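The %!s(MISSING) token in the command above is a Go fmt artifact in the log line, not part of what ran on the guest; judging from the output that follows, the executed command presumably amounted to this (reconstruction with %s restored):

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio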
	
	I0108 20:38:51.889761   31613 main.go:141] libmachine: Checking connection to Docker...
	I0108 20:38:51.889772   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetURL
	I0108 20:38:51.890980   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | Using libvirt version 6000000
	I0108 20:38:51.893044   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.893525   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:51.893558   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.893712   31613 main.go:141] libmachine: Docker is up and running!
	I0108 20:38:51.893729   31613 main.go:141] libmachine: Reticulating splines...
	I0108 20:38:51.893737   31613 client.go:171] LocalClient.Create took 24.398023148s
	I0108 20:38:51.893760   31613 start.go:167] duration metric: libmachine.API.Create for "multinode-340815" took 24.398080827s
	I0108 20:38:51.893773   31613 start.go:300] post-start starting for "multinode-340815-m02" (driver="kvm2")
	I0108 20:38:51.893786   31613 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:38:51.893811   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:38:51.894067   31613 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:38:51.894096   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:38:51.896431   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.896827   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:51.896860   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:51.896987   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:38:51.897166   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:51.897365   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:38:51.897510   31613 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/id_rsa Username:docker}
	I0108 20:38:51.990510   31613 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:38:51.994486   31613 command_runner.go:130] > NAME=Buildroot
	I0108 20:38:51.994506   31613 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 20:38:51.994510   31613 command_runner.go:130] > ID=buildroot
	I0108 20:38:51.994519   31613 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 20:38:51.994524   31613 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 20:38:51.994774   31613 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 20:38:51.994794   31613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/addons for local assets ...
	I0108 20:38:51.994853   31613 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/files for local assets ...
	I0108 20:38:51.994953   31613 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> 178962.pem in /etc/ssl/certs
	I0108 20:38:51.994966   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> /etc/ssl/certs/178962.pem
	I0108 20:38:51.995071   31613 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:38:52.004538   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /etc/ssl/certs/178962.pem (1708 bytes)
	I0108 20:38:52.029117   31613 start.go:303] post-start completed in 135.328629ms
	I0108 20:38:52.029169   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetConfigRaw
	I0108 20:38:52.029797   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetIP
	I0108 20:38:52.032490   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:52.032845   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:52.032882   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:52.033122   31613 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/config.json ...
	I0108 20:38:52.033331   31613 start.go:128] duration metric: createHost completed in 24.555335917s
	I0108 20:38:52.033359   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:38:52.035652   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:52.036017   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:52.036052   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:52.036252   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:38:52.036469   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:52.036620   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:52.036770   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:38:52.036910   31613 main.go:141] libmachine: Using SSH client type: native
	I0108 20:38:52.037232   31613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0108 20:38:52.037248   31613 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 20:38:52.165401   31613 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704746332.146900577
	
	I0108 20:38:52.165422   31613 fix.go:206] guest clock: 1704746332.146900577
	I0108 20:38:52.165430   31613 fix.go:219] Guest: 2024-01-08 20:38:52.146900577 +0000 UTC Remote: 2024-01-08 20:38:52.033344419 +0000 UTC m=+91.345800174 (delta=113.556158ms)
	I0108 20:38:52.165444   31613 fix.go:190] guest clock delta is within tolerance: 113.556158ms
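The garbled date +%!s(MISSING).%!N(MISSING) is presumably date +%s.%N rendered through the same fmt artifact: it prints the guest's epoch time with nanoseconds, which fix.go compares against the host clock to arrive at the 113.556158ms delta reported as within tolerance. A minimal sketch of the same comparison (illustrative only; SSH options and key path elided):

    guest=$(ssh docker@192.168.39.78 'date +%s.%N')   # guest clock
    host=$(date +%s.%N)                               # host clock at roughly the same instant
    echo "delta: $(echo "$host - $guest" | bc)s"      # small skew is tolerated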
	I0108 20:38:52.165449   31613 start.go:83] releasing machines lock for "multinode-340815-m02", held for 24.68754699s
	I0108 20:38:52.165471   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:38:52.165742   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetIP
	I0108 20:38:52.168383   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:52.168742   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:52.168773   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:52.171590   31613 out.go:177] * Found network options:
	I0108 20:38:52.173095   31613 out.go:177]   - NO_PROXY=192.168.39.196
	W0108 20:38:52.174582   31613 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 20:38:52.174622   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:38:52.175127   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:38:52.175295   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:38:52.175383   31613 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:38:52.175423   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	W0108 20:38:52.175479   31613 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 20:38:52.175539   31613 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:38:52.175555   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:38:52.178104   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:52.178242   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:52.178492   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:52.178519   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:52.178645   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:38:52.178669   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:52.178689   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:52.178872   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:52.178876   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:38:52.179053   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:38:52.179058   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:38:52.179220   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:38:52.179216   31613 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/id_rsa Username:docker}
	I0108 20:38:52.179320   31613 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/id_rsa Username:docker}
	I0108 20:38:52.432372   31613 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:38:52.432372   31613 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 20:38:52.438670   31613 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 20:38:52.438824   31613 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 20:38:52.438894   31613 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:38:52.453412   31613 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 20:38:52.453470   31613 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 20:38:52.453480   31613 start.go:475] detecting cgroup driver to use...
	I0108 20:38:52.453547   31613 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:38:52.467912   31613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:38:52.480343   31613 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:38:52.480415   31613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:38:52.493065   31613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:38:52.506028   31613 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:38:52.520069   31613 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0108 20:38:52.611610   31613 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:38:52.737841   31613 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 20:38:52.737883   31613 docker.go:233] disabling docker service ...
	I0108 20:38:52.737945   31613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:38:52.752496   31613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:38:52.765215   31613 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0108 20:38:52.765616   31613 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:38:52.780371   31613 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 20:38:52.877281   31613 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:38:52.891655   31613 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0108 20:38:52.892033   31613 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 20:38:52.992591   31613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:38:53.006252   31613 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:38:53.025208   31613 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
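The two commands above point crictl at CRI-O by writing /etc/crictl.yaml with the socket path echoed back in the log. A minimal sketch of doing the same by hand, assuming the same socket path (the crictl verification call is an addition, not part of this run):

    sudo mkdir -p /etc
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # crictl reads /etc/crictl.yaml by default, so it should now reach CRI-O without --runtime-endpoint:
    sudo crictl info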
	I0108 20:38:53.025250   31613 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 20:38:53.025291   31613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:38:53.034801   31613 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:38:53.034876   31613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:38:53.044431   31613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:38:53.053949   31613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
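The three sed edits above pin the pause image and switch CRI-O to the cgroupfs cgroup manager in the 02-crio.conf drop-in. A hedged way to confirm the result (the grep is added for illustration; the expected values match the `crio config` dump later in this log):

    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # Expected, per this run:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"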
	I0108 20:38:53.063357   31613 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:38:53.073448   31613 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:38:53.083605   31613 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 20:38:53.083645   31613 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 20:38:53.083694   31613 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 20:38:53.096412   31613 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
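Because /proc/sys/net/bridge/bridge-nf-call-iptables did not exist, the run falls back to loading br_netfilter and then enables IPv4 forwarding. A minimal sketch of the same check-then-fix sequence (the commands mirror the log; the final re-check is an added assumption):

    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    # Re-check: the bridge sysctl should exist once br_netfilter is loaded
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward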
	I0108 20:38:53.106605   31613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:38:53.220903   31613 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 20:38:53.403630   31613 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:38:53.403699   31613 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:38:53.409193   31613 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 20:38:53.409215   31613 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 20:38:53.409221   31613 command_runner.go:130] > Device: 16h/22d	Inode: 778         Links: 1
	I0108 20:38:53.409228   31613 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:38:53.409233   31613 command_runner.go:130] > Access: 2024-01-08 20:38:53.366608424 +0000
	I0108 20:38:53.409244   31613 command_runner.go:130] > Modify: 2024-01-08 20:38:53.366608424 +0000
	I0108 20:38:53.409248   31613 command_runner.go:130] > Change: 2024-01-08 20:38:53.366608424 +0000
	I0108 20:38:53.409252   31613 command_runner.go:130] >  Birth: -
	I0108 20:38:53.409280   31613 start.go:543] Will wait 60s for crictl version
	I0108 20:38:53.409329   31613 ssh_runner.go:195] Run: which crictl
	I0108 20:38:53.413114   31613 command_runner.go:130] > /usr/bin/crictl
	I0108 20:38:53.413298   31613 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:38:53.452618   31613 command_runner.go:130] > Version:  0.1.0
	I0108 20:38:53.452640   31613 command_runner.go:130] > RuntimeName:  cri-o
	I0108 20:38:53.452644   31613 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 20:38:53.452649   31613 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 20:38:53.452829   31613 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 20:38:53.452907   31613 ssh_runner.go:195] Run: crio --version
	I0108 20:38:53.499977   31613 command_runner.go:130] > crio version 1.24.1
	I0108 20:38:53.500002   31613 command_runner.go:130] > Version:          1.24.1
	I0108 20:38:53.500009   31613 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 20:38:53.500013   31613 command_runner.go:130] > GitTreeState:     dirty
	I0108 20:38:53.500019   31613 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 20:38:53.500024   31613 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 20:38:53.500028   31613 command_runner.go:130] > Compiler:         gc
	I0108 20:38:53.500032   31613 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:38:53.500037   31613 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:38:53.500043   31613 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:38:53.500047   31613 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:38:53.500052   31613 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:38:53.501378   31613 ssh_runner.go:195] Run: crio --version
	I0108 20:38:53.553325   31613 command_runner.go:130] > crio version 1.24.1
	I0108 20:38:53.553350   31613 command_runner.go:130] > Version:          1.24.1
	I0108 20:38:53.553361   31613 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 20:38:53.553368   31613 command_runner.go:130] > GitTreeState:     dirty
	I0108 20:38:53.553377   31613 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 20:38:53.553384   31613 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 20:38:53.553391   31613 command_runner.go:130] > Compiler:         gc
	I0108 20:38:53.553398   31613 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:38:53.553408   31613 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:38:53.553427   31613 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:38:53.553439   31613 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:38:53.553446   31613 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:38:53.555317   31613 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 20:38:53.557000   31613 out.go:177]   - env NO_PROXY=192.168.39.196
	I0108 20:38:53.558483   31613 main.go:141] libmachine: (multinode-340815-m02) Calling .GetIP
	I0108 20:38:53.561074   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:53.561414   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:38:53.561469   31613 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:38:53.561641   31613 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 20:38:53.565770   31613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:38:53.578797   31613 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815 for IP: 192.168.39.78
	I0108 20:38:53.578824   31613 certs.go:190] acquiring lock for shared ca certs: {Name:mke01aa9d73e320a9a3907677cf29c75f0fa86d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:38:53.578988   31613 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key
	I0108 20:38:53.579045   31613 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key
	I0108 20:38:53.579063   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 20:38:53.579085   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 20:38:53.579102   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 20:38:53.579118   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 20:38:53.579175   31613 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem (1338 bytes)
	W0108 20:38:53.579212   31613 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896_empty.pem, impossibly tiny 0 bytes
	I0108 20:38:53.579240   31613 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:38:53.579277   31613 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem (1082 bytes)
	I0108 20:38:53.579313   31613 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:38:53.579348   31613 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem (1675 bytes)
	I0108 20:38:53.579405   31613 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem (1708 bytes)
	I0108 20:38:53.579434   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> /usr/share/ca-certificates/178962.pem
	I0108 20:38:53.579451   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:38:53.579468   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem -> /usr/share/ca-certificates/17896.pem
	I0108 20:38:53.579876   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:38:53.602699   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:38:53.626812   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:38:53.651129   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:38:53.674805   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /usr/share/ca-certificates/178962.pem (1708 bytes)
	I0108 20:38:53.698512   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:38:53.722079   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem --> /usr/share/ca-certificates/17896.pem (1338 bytes)
	I0108 20:38:53.748703   31613 ssh_runner.go:195] Run: openssl version
	I0108 20:38:53.754026   31613 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 20:38:53.754391   31613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178962.pem && ln -fs /usr/share/ca-certificates/178962.pem /etc/ssl/certs/178962.pem"
	I0108 20:38:53.764668   31613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178962.pem
	I0108 20:38:53.769269   31613 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 20:38:53.769428   31613 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 20:38:53.769505   31613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178962.pem
	I0108 20:38:53.775313   31613 command_runner.go:130] > 3ec20f2e
	I0108 20:38:53.775387   31613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178962.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:38:53.785875   31613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:38:53.796912   31613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:38:53.801881   31613 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:38:53.802123   31613 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:38:53.802200   31613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:38:53.807705   31613 command_runner.go:130] > b5213941
	I0108 20:38:53.807917   31613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:38:53.818279   31613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17896.pem && ln -fs /usr/share/ca-certificates/17896.pem /etc/ssl/certs/17896.pem"
	I0108 20:38:53.828276   31613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17896.pem
	I0108 20:38:53.832712   31613 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 20:38:53.833148   31613 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 20:38:53.833206   31613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17896.pem
	I0108 20:38:53.838753   31613 command_runner.go:130] > 51391683
	I0108 20:38:53.839183   31613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17896.pem /etc/ssl/certs/51391683.0"
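Each certificate copied above is installed under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name, which is how OpenSSL locates trusted CAs. A sketch of that pattern for one file from this run (variable names are illustrative):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this log
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # hashed-name lookup convention used by OpenSSL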
	I0108 20:38:53.849692   31613 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:38:53.854071   31613 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:38:53.854106   31613 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:38:53.854183   31613 ssh_runner.go:195] Run: crio config
	I0108 20:38:53.912800   31613 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 20:38:53.912823   31613 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 20:38:53.912830   31613 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 20:38:53.912834   31613 command_runner.go:130] > #
	I0108 20:38:53.912840   31613 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 20:38:53.912847   31613 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 20:38:53.912853   31613 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 20:38:53.912860   31613 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 20:38:53.912865   31613 command_runner.go:130] > # reload'.
	I0108 20:38:53.912871   31613 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 20:38:53.912878   31613 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 20:38:53.912884   31613 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 20:38:53.912890   31613 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 20:38:53.912896   31613 command_runner.go:130] > [crio]
	I0108 20:38:53.912902   31613 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 20:38:53.912913   31613 command_runner.go:130] > # containers images, in this directory.
	I0108 20:38:53.912921   31613 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 20:38:53.912939   31613 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 20:38:53.912949   31613 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 20:38:53.912959   31613 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 20:38:53.912965   31613 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 20:38:53.912970   31613 command_runner.go:130] > storage_driver = "overlay"
	I0108 20:38:53.912975   31613 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 20:38:53.912981   31613 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 20:38:53.912985   31613 command_runner.go:130] > storage_option = [
	I0108 20:38:53.912991   31613 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 20:38:53.912998   31613 command_runner.go:130] > ]
	I0108 20:38:53.913005   31613 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 20:38:53.913013   31613 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 20:38:53.913018   31613 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 20:38:53.913027   31613 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 20:38:53.913032   31613 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 20:38:53.913039   31613 command_runner.go:130] > # always happen on a node reboot
	I0108 20:38:53.913044   31613 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 20:38:53.913052   31613 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 20:38:53.913066   31613 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 20:38:53.913081   31613 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 20:38:53.913092   31613 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 20:38:53.913106   31613 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 20:38:53.913122   31613 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 20:38:53.913133   31613 command_runner.go:130] > # internal_wipe = true
	I0108 20:38:53.913146   31613 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 20:38:53.913157   31613 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 20:38:53.913163   31613 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 20:38:53.913171   31613 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 20:38:53.913177   31613 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 20:38:53.913183   31613 command_runner.go:130] > [crio.api]
	I0108 20:38:53.913188   31613 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 20:38:53.913195   31613 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 20:38:53.913200   31613 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 20:38:53.913206   31613 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 20:38:53.913212   31613 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 20:38:53.913217   31613 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 20:38:53.913223   31613 command_runner.go:130] > # stream_port = "0"
	I0108 20:38:53.913228   31613 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 20:38:53.913235   31613 command_runner.go:130] > # stream_enable_tls = false
	I0108 20:38:53.913241   31613 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 20:38:53.913249   31613 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 20:38:53.913260   31613 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 20:38:53.913274   31613 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 20:38:53.913284   31613 command_runner.go:130] > # minutes.
	I0108 20:38:53.913291   31613 command_runner.go:130] > # stream_tls_cert = ""
	I0108 20:38:53.913305   31613 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 20:38:53.913318   31613 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 20:38:53.913324   31613 command_runner.go:130] > # stream_tls_key = ""
	I0108 20:38:53.913332   31613 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 20:38:53.913342   31613 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 20:38:53.913354   31613 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 20:38:53.913364   31613 command_runner.go:130] > # stream_tls_ca = ""
	I0108 20:38:53.913379   31613 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:38:53.913390   31613 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 20:38:53.913406   31613 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:38:53.913416   31613 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 20:38:53.913428   31613 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 20:38:53.913435   31613 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 20:38:53.913440   31613 command_runner.go:130] > [crio.runtime]
	I0108 20:38:53.913452   31613 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 20:38:53.913458   31613 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 20:38:53.913463   31613 command_runner.go:130] > # "nofile=1024:2048"
	I0108 20:38:53.913469   31613 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 20:38:53.913476   31613 command_runner.go:130] > # default_ulimits = [
	I0108 20:38:53.913479   31613 command_runner.go:130] > # ]
	I0108 20:38:53.913487   31613 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 20:38:53.913495   31613 command_runner.go:130] > # no_pivot = false
	I0108 20:38:53.913500   31613 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 20:38:53.913509   31613 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 20:38:53.913514   31613 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 20:38:53.913519   31613 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 20:38:53.913525   31613 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 20:38:53.913531   31613 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:38:53.913538   31613 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 20:38:53.913543   31613 command_runner.go:130] > # Cgroup setting for conmon
	I0108 20:38:53.913551   31613 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 20:38:53.913559   31613 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 20:38:53.913565   31613 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 20:38:53.913572   31613 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 20:38:53.913578   31613 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:38:53.913582   31613 command_runner.go:130] > conmon_env = [
	I0108 20:38:53.913594   31613 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 20:38:53.913597   31613 command_runner.go:130] > ]
	I0108 20:38:53.913602   31613 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 20:38:53.913607   31613 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 20:38:53.913613   31613 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 20:38:53.913623   31613 command_runner.go:130] > # default_env = [
	I0108 20:38:53.913628   31613 command_runner.go:130] > # ]
	I0108 20:38:53.913638   31613 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 20:38:53.913648   31613 command_runner.go:130] > # selinux = false
	I0108 20:38:53.913658   31613 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 20:38:53.913667   31613 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 20:38:53.913673   31613 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 20:38:53.913680   31613 command_runner.go:130] > # seccomp_profile = ""
	I0108 20:38:53.913686   31613 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 20:38:53.913694   31613 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 20:38:53.913700   31613 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 20:38:53.913706   31613 command_runner.go:130] > # which might increase security.
	I0108 20:38:53.913711   31613 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 20:38:53.913718   31613 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 20:38:53.913726   31613 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 20:38:53.913733   31613 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 20:38:53.913741   31613 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 20:38:53.913746   31613 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:38:53.913753   31613 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 20:38:53.913758   31613 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 20:38:53.913762   31613 command_runner.go:130] > # the cgroup blockio controller.
	I0108 20:38:53.913769   31613 command_runner.go:130] > # blockio_config_file = ""
	I0108 20:38:53.913776   31613 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 20:38:53.913782   31613 command_runner.go:130] > # irqbalance daemon.
	I0108 20:38:53.913787   31613 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 20:38:53.913795   31613 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 20:38:53.913800   31613 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:38:53.913806   31613 command_runner.go:130] > # rdt_config_file = ""
	I0108 20:38:53.913812   31613 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 20:38:53.913819   31613 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 20:38:53.913828   31613 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 20:38:53.913838   31613 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 20:38:53.913849   31613 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 20:38:53.913863   31613 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 20:38:53.913872   31613 command_runner.go:130] > # will be added.
	I0108 20:38:53.913877   31613 command_runner.go:130] > # default_capabilities = [
	I0108 20:38:53.913884   31613 command_runner.go:130] > # 	"CHOWN",
	I0108 20:38:53.913888   31613 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 20:38:53.913896   31613 command_runner.go:130] > # 	"FSETID",
	I0108 20:38:53.913900   31613 command_runner.go:130] > # 	"FOWNER",
	I0108 20:38:53.913905   31613 command_runner.go:130] > # 	"SETGID",
	I0108 20:38:53.913910   31613 command_runner.go:130] > # 	"SETUID",
	I0108 20:38:53.913914   31613 command_runner.go:130] > # 	"SETPCAP",
	I0108 20:38:53.913918   31613 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 20:38:53.913923   31613 command_runner.go:130] > # 	"KILL",
	I0108 20:38:53.913927   31613 command_runner.go:130] > # ]
	I0108 20:38:53.913935   31613 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 20:38:53.913942   31613 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:38:53.913948   31613 command_runner.go:130] > # default_sysctls = [
	I0108 20:38:53.913951   31613 command_runner.go:130] > # ]
	I0108 20:38:53.913956   31613 command_runner.go:130] > # List of devices on the host that a
	I0108 20:38:53.913965   31613 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 20:38:53.913969   31613 command_runner.go:130] > # allowed_devices = [
	I0108 20:38:53.913974   31613 command_runner.go:130] > # 	"/dev/fuse",
	I0108 20:38:53.913977   31613 command_runner.go:130] > # ]
	I0108 20:38:53.913984   31613 command_runner.go:130] > # List of additional devices. specified as
	I0108 20:38:53.913991   31613 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 20:38:53.913998   31613 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 20:38:53.914012   31613 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:38:53.914019   31613 command_runner.go:130] > # additional_devices = [
	I0108 20:38:53.914022   31613 command_runner.go:130] > # ]
	I0108 20:38:53.914028   31613 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 20:38:53.914034   31613 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 20:38:53.914038   31613 command_runner.go:130] > # 	"/etc/cdi",
	I0108 20:38:53.914042   31613 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 20:38:53.914048   31613 command_runner.go:130] > # ]
	I0108 20:38:53.914054   31613 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 20:38:53.914061   31613 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 20:38:53.914066   31613 command_runner.go:130] > # Defaults to false.
	I0108 20:38:53.914073   31613 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 20:38:53.914079   31613 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 20:38:53.914087   31613 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 20:38:53.914091   31613 command_runner.go:130] > # hooks_dir = [
	I0108 20:38:53.914097   31613 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 20:38:53.914100   31613 command_runner.go:130] > # ]
	I0108 20:38:53.914109   31613 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 20:38:53.914119   31613 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 20:38:53.914130   31613 command_runner.go:130] > # its default mounts from the following two files:
	I0108 20:38:53.914135   31613 command_runner.go:130] > #
	I0108 20:38:53.914144   31613 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 20:38:53.914158   31613 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 20:38:53.914171   31613 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 20:38:53.914180   31613 command_runner.go:130] > #
	I0108 20:38:53.914187   31613 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 20:38:53.914201   31613 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 20:38:53.914211   31613 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 20:38:53.914219   31613 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 20:38:53.914225   31613 command_runner.go:130] > #
	I0108 20:38:53.914237   31613 command_runner.go:130] > # default_mounts_file = ""
	I0108 20:38:53.914250   31613 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 20:38:53.914264   31613 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 20:38:53.914275   31613 command_runner.go:130] > pids_limit = 1024
	I0108 20:38:53.914287   31613 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 20:38:53.914302   31613 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 20:38:53.914316   31613 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 20:38:53.914331   31613 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 20:38:53.914342   31613 command_runner.go:130] > # log_size_max = -1
	I0108 20:38:53.914357   31613 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 20:38:53.914368   31613 command_runner.go:130] > # log_to_journald = false
	I0108 20:38:53.914382   31613 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 20:38:53.914394   31613 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 20:38:53.914404   31613 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 20:38:53.914416   31613 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 20:38:53.914429   31613 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 20:38:53.914440   31613 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 20:38:53.914456   31613 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 20:38:53.914466   31613 command_runner.go:130] > # read_only = false
	I0108 20:38:53.914481   31613 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 20:38:53.914495   31613 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 20:38:53.914508   31613 command_runner.go:130] > # live configuration reload.
	I0108 20:38:53.914519   31613 command_runner.go:130] > # log_level = "info"
	I0108 20:38:53.914532   31613 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 20:38:53.914545   31613 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:38:53.914556   31613 command_runner.go:130] > # log_filter = ""
	I0108 20:38:53.914568   31613 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 20:38:53.914582   31613 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 20:38:53.914592   31613 command_runner.go:130] > # separated by comma.
	I0108 20:38:53.914603   31613 command_runner.go:130] > # uid_mappings = ""
	I0108 20:38:53.914617   31613 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 20:38:53.914631   31613 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 20:38:53.914641   31613 command_runner.go:130] > # separated by comma.
	I0108 20:38:53.914649   31613 command_runner.go:130] > # gid_mappings = ""
	I0108 20:38:53.914663   31613 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 20:38:53.914677   31613 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:38:53.914690   31613 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:38:53.914702   31613 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 20:38:53.914713   31613 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 20:38:53.914727   31613 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:38:53.914741   31613 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:38:53.914752   31613 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 20:38:53.914765   31613 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 20:38:53.914779   31613 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 20:38:53.914792   31613 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 20:38:53.914803   31613 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 20:38:53.914816   31613 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 20:38:53.914827   31613 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 20:38:53.914839   31613 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 20:38:53.914850   31613 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 20:38:53.914859   31613 command_runner.go:130] > drop_infra_ctr = false
	I0108 20:38:53.914873   31613 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 20:38:53.914886   31613 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 20:38:53.914902   31613 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 20:38:53.914912   31613 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 20:38:53.914923   31613 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 20:38:53.914936   31613 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 20:38:53.914946   31613 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 20:38:53.914959   31613 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 20:38:53.914970   31613 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 20:38:53.914983   31613 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 20:38:53.914998   31613 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 20:38:53.915011   31613 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 20:38:53.915022   31613 command_runner.go:130] > # default_runtime = "runc"
	I0108 20:38:53.915032   31613 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 20:38:53.915047   31613 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 20:38:53.915066   31613 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 20:38:53.915078   31613 command_runner.go:130] > # creation as a file is not desired either.
	I0108 20:38:53.915094   31613 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 20:38:53.915107   31613 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 20:38:53.915117   31613 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 20:38:53.915124   31613 command_runner.go:130] > # ]
	I0108 20:38:53.915139   31613 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 20:38:53.915153   31613 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 20:38:53.915167   31613 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 20:38:53.915181   31613 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 20:38:53.915189   31613 command_runner.go:130] > #
	I0108 20:38:53.915198   31613 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 20:38:53.915211   31613 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 20:38:53.915222   31613 command_runner.go:130] > #  runtime_type = "oci"
	I0108 20:38:53.915232   31613 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 20:38:53.915243   31613 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 20:38:53.915254   31613 command_runner.go:130] > #  allowed_annotations = []
	I0108 20:38:53.915264   31613 command_runner.go:130] > # Where:
	I0108 20:38:53.915274   31613 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 20:38:53.915288   31613 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 20:38:53.915302   31613 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 20:38:53.915316   31613 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 20:38:53.915325   31613 command_runner.go:130] > #   in $PATH.
	I0108 20:38:53.915336   31613 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 20:38:53.915348   31613 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 20:38:53.915362   31613 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 20:38:53.915371   31613 command_runner.go:130] > #   state.
	I0108 20:38:53.915384   31613 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 20:38:53.915401   31613 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0108 20:38:53.915415   31613 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 20:38:53.915431   31613 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 20:38:53.915449   31613 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 20:38:53.915464   31613 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 20:38:53.915476   31613 command_runner.go:130] > #   The currently recognized values are:
	I0108 20:38:53.915492   31613 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 20:38:53.915507   31613 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 20:38:53.915520   31613 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 20:38:53.915532   31613 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 20:38:53.915548   31613 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 20:38:53.915562   31613 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 20:38:53.915576   31613 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 20:38:53.915590   31613 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 20:38:53.915602   31613 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 20:38:53.915610   31613 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 20:38:53.915621   31613 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 20:38:53.915630   31613 command_runner.go:130] > runtime_type = "oci"
	I0108 20:38:53.915640   31613 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 20:38:53.915649   31613 command_runner.go:130] > runtime_config_path = ""
	I0108 20:38:53.915659   31613 command_runner.go:130] > monitor_path = ""
	I0108 20:38:53.915670   31613 command_runner.go:130] > monitor_cgroup = ""
	I0108 20:38:53.915681   31613 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 20:38:53.915695   31613 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 20:38:53.915706   31613 command_runner.go:130] > # running containers
	I0108 20:38:53.915717   31613 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 20:38:53.915731   31613 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 20:38:53.915761   31613 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 20:38:53.915774   31613 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 20:38:53.915783   31613 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 20:38:53.915793   31613 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 20:38:53.915805   31613 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 20:38:53.915814   31613 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 20:38:53.915825   31613 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 20:38:53.915837   31613 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 20:38:53.915853   31613 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 20:38:53.915866   31613 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 20:38:53.915879   31613 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 20:38:53.915896   31613 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 20:38:53.915913   31613 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 20:38:53.915926   31613 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 20:38:53.915945   31613 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 20:38:53.915959   31613 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 20:38:53.915973   31613 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 20:38:53.915988   31613 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 20:38:53.915997   31613 command_runner.go:130] > # Example:
	I0108 20:38:53.916008   31613 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 20:38:53.916020   31613 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 20:38:53.916032   31613 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 20:38:53.916041   31613 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 20:38:53.916051   31613 command_runner.go:130] > # cpuset = 0
	I0108 20:38:53.916060   31613 command_runner.go:130] > # cpushares = "0-1"
	I0108 20:38:53.916069   31613 command_runner.go:130] > # Where:
	I0108 20:38:53.916080   31613 command_runner.go:130] > # The workload name is workload-type.
	I0108 20:38:53.916111   31613 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 20:38:53.916125   31613 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 20:38:53.916136   31613 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 20:38:53.916152   31613 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 20:38:53.916166   31613 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 20:38:53.916175   31613 command_runner.go:130] > # 
	I0108 20:38:53.916188   31613 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 20:38:53.916197   31613 command_runner.go:130] > #
	I0108 20:38:53.916208   31613 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 20:38:53.916226   31613 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 20:38:53.916240   31613 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 20:38:53.916254   31613 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 20:38:53.916267   31613 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 20:38:53.916277   31613 command_runner.go:130] > [crio.image]
	I0108 20:38:53.916290   31613 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 20:38:53.916298   31613 command_runner.go:130] > # default_transport = "docker://"
	I0108 20:38:53.916313   31613 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 20:38:53.916331   31613 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:38:53.916341   31613 command_runner.go:130] > # global_auth_file = ""
	I0108 20:38:53.916352   31613 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 20:38:53.916370   31613 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:38:53.916382   31613 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 20:38:53.916396   31613 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 20:38:53.916407   31613 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:38:53.916420   31613 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:38:53.916431   31613 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 20:38:53.916449   31613 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 20:38:53.916464   31613 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 20:38:53.916477   31613 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 20:38:53.916491   31613 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 20:38:53.916501   31613 command_runner.go:130] > # pause_command = "/pause"
	I0108 20:38:53.916513   31613 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 20:38:53.916527   31613 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 20:38:53.916542   31613 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 20:38:53.916555   31613 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 20:38:53.916568   31613 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 20:38:53.916578   31613 command_runner.go:130] > # signature_policy = ""
	I0108 20:38:53.916592   31613 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 20:38:53.916606   31613 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 20:38:53.916617   31613 command_runner.go:130] > # changing them here.
	I0108 20:38:53.916625   31613 command_runner.go:130] > # insecure_registries = [
	I0108 20:38:53.916634   31613 command_runner.go:130] > # ]
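If a registry genuinely needs to skip TLS verification, the comment above recommends configuring it in containers-registries.conf(5) rather than here; a hedged sketch, with a hypothetical registry address:

sudo tee -a /etc/containers/registries.conf <<'EOF'
[[registry]]
location = "192.168.39.1:5000"
insecure = true
EOF
sudo systemctl restart crio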
	I0108 20:38:53.916645   31613 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 20:38:53.916657   31613 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 20:38:53.916668   31613 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 20:38:53.916680   31613 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 20:38:53.916692   31613 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 20:38:53.916706   31613 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 20:38:53.916716   31613 command_runner.go:130] > # CNI plugins.
	I0108 20:38:53.916726   31613 command_runner.go:130] > [crio.network]
	I0108 20:38:53.916737   31613 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 20:38:53.916750   31613 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 20:38:53.916761   31613 command_runner.go:130] > # cni_default_network = ""
	I0108 20:38:53.916771   31613 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 20:38:53.916782   31613 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 20:38:53.916792   31613 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 20:38:53.916804   31613 command_runner.go:130] > # plugin_dirs = [
	I0108 20:38:53.916812   31613 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 20:38:53.916821   31613 command_runner.go:130] > # ]
	I0108 20:38:53.916832   31613 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 20:38:53.916841   31613 command_runner.go:130] > [crio.metrics]
	I0108 20:38:53.916853   31613 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 20:38:53.916861   31613 command_runner.go:130] > enable_metrics = true
	I0108 20:38:53.916873   31613 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 20:38:53.916885   31613 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 20:38:53.916896   31613 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 20:38:53.916910   31613 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 20:38:53.916923   31613 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 20:38:53.916932   31613 command_runner.go:130] > # metrics_collectors = [
	I0108 20:38:53.916942   31613 command_runner.go:130] > # 	"operations",
	I0108 20:38:53.916952   31613 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 20:38:53.916964   31613 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 20:38:53.916975   31613 command_runner.go:130] > # 	"operations_errors",
	I0108 20:38:53.916986   31613 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 20:38:53.916995   31613 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 20:38:53.917006   31613 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 20:38:53.917016   31613 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 20:38:53.917024   31613 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 20:38:53.917034   31613 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 20:38:53.917043   31613 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 20:38:53.917054   31613 command_runner.go:130] > # 	"containers_oom_total",
	I0108 20:38:53.917062   31613 command_runner.go:130] > # 	"containers_oom",
	I0108 20:38:53.917072   31613 command_runner.go:130] > # 	"processes_defunct",
	I0108 20:38:53.917083   31613 command_runner.go:130] > # 	"operations_total",
	I0108 20:38:53.917092   31613 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 20:38:53.917104   31613 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 20:38:53.917115   31613 command_runner.go:130] > # 	"operations_errors_total",
	I0108 20:38:53.917123   31613 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 20:38:53.917135   31613 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 20:38:53.917146   31613 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 20:38:53.917157   31613 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 20:38:53.917168   31613 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 20:38:53.917180   31613 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 20:38:53.917189   31613 command_runner.go:130] > # ]
	I0108 20:38:53.917200   31613 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 20:38:53.917211   31613 command_runner.go:130] > # metrics_port = 9090
	I0108 20:38:53.917221   31613 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 20:38:53.917230   31613 command_runner.go:130] > # metrics_socket = ""
	I0108 20:38:53.917240   31613 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 20:38:53.917253   31613 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 20:38:53.917267   31613 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 20:38:53.917279   31613 command_runner.go:130] > # certificate on any modification event.
	I0108 20:38:53.917289   31613 command_runner.go:130] > # metrics_cert = ""
	I0108 20:38:53.917300   31613 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 20:38:53.917311   31613 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 20:38:53.917322   31613 command_runner.go:130] > # metrics_key = ""
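With enable_metrics = true above, the collectors listed are served in Prometheus text format; a hedged check from the node, assuming the default (commented-out) metrics_port of 9090 and no metrics_cert/metrics_key configured:

curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head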
	I0108 20:38:53.917335   31613 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 20:38:53.917345   31613 command_runner.go:130] > [crio.tracing]
	I0108 20:38:53.917357   31613 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 20:38:53.917367   31613 command_runner.go:130] > # enable_tracing = false
	I0108 20:38:53.917380   31613 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 20:38:53.917391   31613 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 20:38:53.917404   31613 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 20:38:53.917415   31613 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 20:38:53.917429   31613 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 20:38:53.917438   31613 command_runner.go:130] > [crio.stats]
	I0108 20:38:53.917454   31613 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 20:38:53.917467   31613 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 20:38:53.917478   31613 command_runner.go:130] > # stats_collection_period = 0
	I0108 20:38:53.917514   31613 command_runner.go:130] ! time="2024-01-08 20:38:53.894572081Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 20:38:53.917533   31613 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 20:38:53.917599   31613 cni.go:84] Creating CNI manager for ""
	I0108 20:38:53.917609   31613 cni.go:136] 2 nodes found, recommending kindnet
	I0108 20:38:53.917620   31613 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:38:53.917646   31613 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-340815 NodeName:multinode-340815-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:38:53.917784   31613 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-340815-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
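A hedged way to cross-check the ClusterConfiguration the cluster actually stores (the join preflight output further down points at the same ConfigMap; the context name matches the minikube profile):

kubectl --context multinode-340815 -n kube-system get configmap kubeadm-config -o yaml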
	I0108 20:38:53.917849   31613 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-340815-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 20:38:53.917915   31613 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:38:53.928272   31613 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0108 20:38:53.928434   31613 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0108 20:38:53.928495   31613 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0108 20:38:53.937438   31613 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0108 20:38:53.937462   31613 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0108 20:38:53.937467   31613 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0108 20:38:53.937482   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0108 20:38:53.937553   31613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0108 20:38:53.944975   31613 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0108 20:38:53.945014   31613 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0108 20:38:53.945033   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0108 20:39:19.703260   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0108 20:39:19.703333   31613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0108 20:39:19.708360   31613 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0108 20:39:19.708569   31613 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0108 20:39:19.708607   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0108 20:39:53.971213   31613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:39:53.986901   31613 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0108 20:39:53.986985   31613 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0108 20:39:53.991563   31613 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0108 20:39:53.991970   31613 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0108 20:39:53.991997   31613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
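A hedged sketch of the transfer performed above, done by hand with the same release URLs and checksum files; after verification, the log copies the binary to /var/lib/minikube/binaries/v1.28.4/ on 192.168.39.78 over SSH:

curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet
curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check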
	I0108 20:39:54.529667   31613 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 20:39:54.538740   31613 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0108 20:39:54.556769   31613 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:39:54.574436   31613 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0108 20:39:54.578570   31613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:39:54.591315   31613 host.go:66] Checking if "multinode-340815" exists ...
	I0108 20:39:54.591569   31613 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:39:54.591681   31613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:39:54.591715   31613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:39:54.605581   31613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36271
	I0108 20:39:54.606044   31613 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:39:54.606547   31613 main.go:141] libmachine: Using API Version  1
	I0108 20:39:54.606567   31613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:39:54.606884   31613 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:39:54.607058   31613 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:39:54.607200   31613 start.go:304] JoinCluster: &{Name:multinode-340815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:39:54.607280   31613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 20:39:54.607295   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:39:54.610309   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:39:54.610758   31613 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:39:54.610785   31613 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:39:54.610967   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:39:54.611110   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:39:54.611264   31613 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:39:54.611422   31613 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:39:54.791816   31613 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 5cxfgw.m4gvepusvmjdeind --discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 
	I0108 20:39:54.791886   31613 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 20:39:54.791923   31613 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5cxfgw.m4gvepusvmjdeind --discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-340815-m02"
	I0108 20:39:54.843255   31613 command_runner.go:130] ! W0108 20:39:54.840866     819 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 20:39:54.966267   31613 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 20:39:57.670142   31613 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 20:39:57.670175   31613 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 20:39:57.670189   31613 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 20:39:57.670200   31613 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:39:57.670210   31613 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:39:57.670218   31613 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 20:39:57.670228   31613 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 20:39:57.670238   31613 command_runner.go:130] > This node has joined the cluster:
	I0108 20:39:57.670248   31613 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 20:39:57.670258   31613 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 20:39:57.670267   31613 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 20:39:57.670295   31613 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5cxfgw.m4gvepusvmjdeind --discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-340815-m02": (2.878353395s)
	I0108 20:39:57.670319   31613 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 20:39:57.935952   31613 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0108 20:39:57.936067   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=multinode-340815 minikube.k8s.io/updated_at=2024_01_08T20_39_57_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:39:58.062033   31613 command_runner.go:130] > node/multinode-340815-m02 labeled
	I0108 20:39:58.064416   31613 start.go:306] JoinCluster complete in 3.4572127s
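The join output above already suggests the check; a hedged equivalent from the host, using the kubeconfig context minikube creates for this profile:

kubectl --context multinode-340815 get nodes -o wide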
	I0108 20:39:58.064437   31613 cni.go:84] Creating CNI manager for ""
	I0108 20:39:58.064442   31613 cni.go:136] 2 nodes found, recommending kindnet
	I0108 20:39:58.064488   31613 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:39:58.072483   31613 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 20:39:58.072510   31613 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 20:39:58.072517   31613 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 20:39:58.072523   31613 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:39:58.072529   31613 command_runner.go:130] > Access: 2024-01-08 20:37:34.195702624 +0000
	I0108 20:39:58.072534   31613 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 20:39:58.072538   31613 command_runner.go:130] > Change: 2024-01-08 20:37:32.351702624 +0000
	I0108 20:39:58.072542   31613 command_runner.go:130] >  Birth: -
	I0108 20:39:58.072699   31613 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:39:58.072722   31613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:39:58.094467   31613 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:39:58.390953   31613 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 20:39:58.390988   31613 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 20:39:58.390998   31613 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 20:39:58.391007   31613 command_runner.go:130] > daemonset.apps/kindnet configured
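A hedged way to confirm the kindnet DaemonSet applied above has rolled out to the new node (context name assumed to match the profile):

kubectl --context multinode-340815 -n kube-system rollout status daemonset/kindnet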
	I0108 20:39:58.391677   31613 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:39:58.392018   31613 kapi.go:59] client config for multinode-340815: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:39:58.392493   31613 round_trippers.go:463] GET https://192.168.39.196:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:39:58.392512   31613 round_trippers.go:469] Request Headers:
	I0108 20:39:58.392525   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:39:58.392535   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:39:58.403382   31613 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0108 20:39:58.403407   31613 round_trippers.go:577] Response Headers:
	I0108 20:39:58.403414   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:39:58.403423   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:39:58.403428   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:39:58.403436   31613 round_trippers.go:580]     Content-Length: 291
	I0108 20:39:58.403444   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:39:58 GMT
	I0108 20:39:58.403454   31613 round_trippers.go:580]     Audit-Id: e0b5226b-f3df-42fe-921a-cff38c9d5577
	I0108 20:39:58.403465   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:39:58.403495   31613 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8a90ea09-afeb-4dda-ab10-18a22e37ea78","resourceVersion":"412","creationTimestamp":"2024-01-08T20:38:05Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 20:39:58.403598   31613 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-340815" context rescaled to 1 replicas
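The rescale above goes through the deployment's scale subresource; a hedged kubectl equivalent:

kubectl --context multinode-340815 -n kube-system scale deployment coredns --replicas=1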
	I0108 20:39:58.403627   31613 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 20:39:58.405950   31613 out.go:177] * Verifying Kubernetes components...
	I0108 20:39:58.407813   31613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:39:58.426720   31613 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:39:58.427055   31613 kapi.go:59] client config for multinode-340815: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:39:58.427371   31613 node_ready.go:35] waiting up to 6m0s for node "multinode-340815-m02" to be "Ready" ...
	I0108 20:39:58.427460   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:39:58.427470   31613 round_trippers.go:469] Request Headers:
	I0108 20:39:58.427482   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:39:58.427490   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:39:58.430403   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:39:58.430433   31613 round_trippers.go:577] Response Headers:
	I0108 20:39:58.430443   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:39:58 GMT
	I0108 20:39:58.430453   31613 round_trippers.go:580]     Audit-Id: 61a51124-2235-49ce-acc2-a41045c3f2c9
	I0108 20:39:58.430462   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:39:58.430472   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:39:58.430480   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:39:58.430492   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:39:58.430624   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:39:58.928362   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:39:58.928398   31613 round_trippers.go:469] Request Headers:
	I0108 20:39:58.928410   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:39:58.928421   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:39:58.931531   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:39:58.931554   31613 round_trippers.go:577] Response Headers:
	I0108 20:39:58.931562   31613 round_trippers.go:580]     Audit-Id: e68e4a4c-223f-47cc-9ccf-01371b68c6ed
	I0108 20:39:58.931568   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:39:58.931573   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:39:58.931578   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:39:58.931588   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:39:58.931593   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:39:58 GMT
	I0108 20:39:58.931677   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:39:59.428414   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:39:59.428440   31613 round_trippers.go:469] Request Headers:
	I0108 20:39:59.428451   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:39:59.428459   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:39:59.431333   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:39:59.431359   31613 round_trippers.go:577] Response Headers:
	I0108 20:39:59.431371   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:39:59.431377   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:39:59.431382   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:39:59.431387   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:39:59 GMT
	I0108 20:39:59.431393   31613 round_trippers.go:580]     Audit-Id: d6f1b2bf-72e8-4320-b005-c3a6c196bbab
	I0108 20:39:59.431401   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:39:59.431553   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:39:59.927695   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:39:59.927721   31613 round_trippers.go:469] Request Headers:
	I0108 20:39:59.927747   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:39:59.927757   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:39:59.932983   31613 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 20:39:59.933013   31613 round_trippers.go:577] Response Headers:
	I0108 20:39:59.933023   31613 round_trippers.go:580]     Audit-Id: 3a18f3e0-fcdd-45ea-b584-72e78405e8ff
	I0108 20:39:59.933031   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:39:59.933039   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:39:59.933048   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:39:59.933056   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:39:59.933063   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:39:59 GMT
	I0108 20:39:59.933485   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:00.428228   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:00.428256   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:00.428264   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:00.428271   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:00.431526   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:40:00.431553   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:00.431568   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:00.431578   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:00.431587   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:00.431597   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:00.431606   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:00 GMT
	I0108 20:40:00.431615   31613 round_trippers.go:580]     Audit-Id: bec4846e-c95e-4aee-849a-64fcf013a557
	I0108 20:40:00.431772   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:00.432123   31613 node_ready.go:58] node "multinode-340815-m02" has status "Ready":"False"
	I0108 20:40:00.928358   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:00.928380   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:00.928391   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:00.928399   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:00.934499   31613 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 20:40:00.934524   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:00.934533   31613 round_trippers.go:580]     Audit-Id: e5d17ffd-8ef5-4869-be3c-200055e1d009
	I0108 20:40:00.934541   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:00.934548   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:00.934557   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:00.934569   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:00.934581   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:00 GMT
	I0108 20:40:00.934827   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:01.428475   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:01.428500   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:01.428511   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:01.428535   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:01.431922   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:40:01.431941   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:01.431948   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:01 GMT
	I0108 20:40:01.431954   31613 round_trippers.go:580]     Audit-Id: a1564d62-d2c4-48ed-863a-ec1ab841bb0e
	I0108 20:40:01.431959   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:01.431963   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:01.431968   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:01.431974   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:01.432159   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:01.927629   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:01.927654   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:01.927663   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:01.927669   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:01.930438   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:01.930473   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:01.930480   31613 round_trippers.go:580]     Audit-Id: 7b794c7b-ac7b-4fa9-86c0-cd56af7e443a
	I0108 20:40:01.930486   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:01.930490   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:01.930496   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:01.930511   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:01.930516   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:01 GMT
	I0108 20:40:01.930916   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:02.427592   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:02.427619   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:02.427629   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:02.427651   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:02.430571   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:02.430596   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:02.430603   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:02.430609   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:02.430614   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:02 GMT
	I0108 20:40:02.430619   31613 round_trippers.go:580]     Audit-Id: a5ea9189-3468-49de-9d94-87d185970efc
	I0108 20:40:02.430624   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:02.430629   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:02.430815   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:02.927970   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:02.928006   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:02.928018   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:02.928027   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:02.930868   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:02.930890   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:02.930898   31613 round_trippers.go:580]     Audit-Id: 2ff2e4fb-ff49-42c8-a4b9-33ba431b4def
	I0108 20:40:02.930909   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:02.930918   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:02.930927   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:02.930934   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:02.930942   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:02 GMT
	I0108 20:40:02.931174   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:02.931521   31613 node_ready.go:58] node "multinode-340815-m02" has status "Ready":"False"
	I0108 20:40:03.427825   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:03.427849   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:03.427860   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:03.427868   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:03.430499   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:03.430524   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:03.430532   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:03.430538   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:03.430543   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:03.430548   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:03.430560   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:03 GMT
	I0108 20:40:03.430568   31613 round_trippers.go:580]     Audit-Id: 67763f88-d90f-429d-8fd9-6951e8452f99
	I0108 20:40:03.430669   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:03.928441   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:03.928488   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:03.928499   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:03.928509   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:03.931687   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:40:03.931716   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:03.931727   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:03.931736   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:03.931744   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:03.931753   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:03 GMT
	I0108 20:40:03.931762   31613 round_trippers.go:580]     Audit-Id: a9ddd153-559b-4476-b5f7-ce9e344377e2
	I0108 20:40:03.931769   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:03.932415   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:04.428016   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:04.428047   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:04.428059   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:04.428067   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:04.430870   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:04.430897   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:04.430909   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:04.430917   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:04.430925   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:04 GMT
	I0108 20:40:04.430933   31613 round_trippers.go:580]     Audit-Id: 63f1c576-7d2f-4f18-98a4-57cf8c455a3c
	I0108 20:40:04.430940   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:04.430948   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:04.431238   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:04.927585   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:04.927610   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:04.927622   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:04.927630   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:04.930986   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:40:04.931010   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:04.931019   31613 round_trippers.go:580]     Audit-Id: 68b3dd47-ab55-4737-aee9-9804a8b5cc8a
	I0108 20:40:04.931026   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:04.931033   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:04.931042   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:04.931049   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:04.931057   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:04 GMT
	I0108 20:40:04.931389   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:04.931741   31613 node_ready.go:58] node "multinode-340815-m02" has status "Ready":"False"
	I0108 20:40:05.428230   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:05.428259   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:05.428268   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:05.428277   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:05.431364   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:40:05.431392   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:05.431403   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:05.431411   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:05.431419   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:05.431428   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:05.431436   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:05 GMT
	I0108 20:40:05.431444   31613 round_trippers.go:580]     Audit-Id: 7888453c-1dfe-430d-8431-7e75574de371
	I0108 20:40:05.431726   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:05.928208   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:05.928235   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:05.928243   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:05.928249   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:05.933790   31613 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 20:40:05.933815   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:05.933823   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:05.933829   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:05.933835   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:05.933843   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:05.933851   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:05 GMT
	I0108 20:40:05.933872   31613 round_trippers.go:580]     Audit-Id: 5fbc6f4a-a044-4778-bfaf-cfc7e64e8c82
	I0108 20:40:05.934261   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:06.427933   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:06.427967   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:06.427979   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:06.427987   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:06.430670   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:06.430698   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:06.430708   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:06.430716   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:06.430724   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:06.430732   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:06 GMT
	I0108 20:40:06.430740   31613 round_trippers.go:580]     Audit-Id: 22e82c56-2f34-4bd1-bcd5-7ae9886ebe2c
	I0108 20:40:06.430752   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:06.430905   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:06.928574   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:06.928607   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:06.928621   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:06.928629   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:06.931208   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:06.931235   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:06.931245   31613 round_trippers.go:580]     Audit-Id: 07eabd7a-c7fe-4fe6-8e80-9e8cde5fdac5
	I0108 20:40:06.931254   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:06.931265   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:06.931273   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:06.931280   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:06.931288   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:06 GMT
	I0108 20:40:06.931617   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"514","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3167 chars]
	I0108 20:40:06.931947   31613 node_ready.go:58] node "multinode-340815-m02" has status "Ready":"False"
	I0108 20:40:07.428012   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:07.428036   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:07.428044   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:07.428050   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:07.430738   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:07.430771   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:07.430781   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:07 GMT
	I0108 20:40:07.430791   31613 round_trippers.go:580]     Audit-Id: 7ab804fc-7eae-429d-b586-343bbbb37465
	I0108 20:40:07.430799   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:07.430807   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:07.430815   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:07.430823   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:07.430929   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"535","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3436 chars]
	I0108 20:40:07.928278   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:07.928302   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:07.928310   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:07.928316   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:07.931076   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:07.931100   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:07.931107   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:07.931113   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:07.931118   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:07 GMT
	I0108 20:40:07.931123   31613 round_trippers.go:580]     Audit-Id: ebcb6920-333a-4d14-b8ed-f0230e83472b
	I0108 20:40:07.931128   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:07.931133   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:07.931551   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"535","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3436 chars]
	I0108 20:40:08.428296   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:08.428324   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:08.428332   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:08.428338   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:08.431526   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:40:08.431577   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:08.431593   31613 round_trippers.go:580]     Audit-Id: 4d10b998-46ee-4e31-9144-7acb2670cffd
	I0108 20:40:08.431601   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:08.431609   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:08.431616   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:08.431623   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:08.431630   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:08 GMT
	I0108 20:40:08.431831   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"539","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I0108 20:40:08.432183   31613 node_ready.go:49] node "multinode-340815-m02" has status "Ready":"True"
	I0108 20:40:08.432206   31613 node_ready.go:38] duration metric: took 10.004814405s waiting for node "multinode-340815-m02" to be "Ready" ...
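Editor's note: the ~500ms GET loop above is minikube's node-readiness wait reporting `"Ready":"False"` until the Node's Ready condition flips. The following is a minimal, hypothetical sketch of that polling pattern using a standard client-go clientset; the function and variable names are illustrative and are not minikube's actual node_ready.go code.

```go
// Hypothetical sketch (not minikube's node_ready.go): poll a node's Ready
// condition roughly every 500ms, as the log above suggests, using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// Each iteration corresponds to one "GET /api/v1/nodes/<name>" line in the log.
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil // node reports Ready:"True"
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q not Ready after %s", name, timeout)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen above
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "multinode-340815-m02", 6*time.Minute); err != nil {
		panic(err)
	}
}
```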
	I0108 20:40:08.432217   31613 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:40:08.432294   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:40:08.432316   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:08.432327   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:08.432337   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:08.436290   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:40:08.436309   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:08.436323   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:08 GMT
	I0108 20:40:08.436329   31613 round_trippers.go:580]     Audit-Id: 171ed08c-a68b-4d45-b3f9-3f9927257f6f
	I0108 20:40:08.436334   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:08.436345   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:08.436354   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:08.436362   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:08.437893   31613 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"540"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"408","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67364 chars]
	I0108 20:40:08.440760   31613 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:08.440863   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:40:08.440874   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:08.440885   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:08.440896   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:08.443531   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:08.443552   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:08.443561   31613 round_trippers.go:580]     Audit-Id: 9d289ce9-438f-4a2b-b2ee-a173508b5e47
	I0108 20:40:08.443569   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:08.443576   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:08.443583   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:08.443592   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:08.443599   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:08 GMT
	I0108 20:40:08.443743   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"408","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0108 20:40:08.444319   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:40:08.444330   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:08.444346   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:08.444355   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:08.446768   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:08.446791   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:08.446798   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:08.446804   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:08.446809   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:08.446814   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:08 GMT
	I0108 20:40:08.446819   31613 round_trippers.go:580]     Audit-Id: be507138-9ff5-42ea-9e3d-fee2b78e5271
	I0108 20:40:08.446825   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:08.446955   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:40:08.447290   31613 pod_ready.go:92] pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace has status "Ready":"True"
	I0108 20:40:08.447306   31613 pod_ready.go:81] duration metric: took 6.516939ms waiting for pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:08.447320   31613 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:08.447386   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-340815
	I0108 20:40:08.447394   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:08.447401   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:08.447406   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:08.449663   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:08.449683   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:08.449693   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:08.449701   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:08.449710   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:08 GMT
	I0108 20:40:08.449725   31613 round_trippers.go:580]     Audit-Id: 9738d0ec-eb5a-48af-b54a-be8f0759ca7e
	I0108 20:40:08.449740   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:08.449748   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:08.449925   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-340815","namespace":"kube-system","uid":"c6d1e2c4-6dbc-4495-ac68-c4b030195c2c","resourceVersion":"404","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.196:2379","kubernetes.io/config.hash":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.mirror":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.seen":"2024-01-08T20:38:05.870869333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0108 20:40:08.450326   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:40:08.450339   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:08.450346   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:08.450352   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:08.452638   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:08.452656   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:08.452664   31613 round_trippers.go:580]     Audit-Id: bf5c8427-442b-48f5-a631-73d01f1ba518
	I0108 20:40:08.452672   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:08.452680   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:08.452688   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:08.452696   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:08.452703   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:08 GMT
	I0108 20:40:08.452876   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:40:08.453257   31613 pod_ready.go:92] pod "etcd-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:40:08.453282   31613 pod_ready.go:81] duration metric: took 5.95433ms waiting for pod "etcd-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:08.453307   31613 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:08.453393   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-340815
	I0108 20:40:08.453404   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:08.453414   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:08.453423   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:08.456360   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:08.456375   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:08.456390   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:08.456395   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:08.456406   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:08.456417   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:08.456436   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:08 GMT
	I0108 20:40:08.456443   31613 round_trippers.go:580]     Audit-Id: 908607a3-a3b2-445a-a74e-8199f335f6b1
	I0108 20:40:08.456995   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-340815","namespace":"kube-system","uid":"523b3dcf-2fae-43b4-a9c6-cd2337ae6d6f","resourceVersion":"405","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.196:8443","kubernetes.io/config.hash":"5a9f4acc9b0ffa502cc0493a6d857b92","kubernetes.io/config.mirror":"5a9f4acc9b0ffa502cc0493a6d857b92","kubernetes.io/config.seen":"2024-01-08T20:38:05.870870627Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0108 20:40:08.457399   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:40:08.457414   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:08.457421   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:08.457426   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:08.459546   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:08.459564   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:08.459573   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:08.459581   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:08.459589   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:08 GMT
	I0108 20:40:08.459601   31613 round_trippers.go:580]     Audit-Id: b2e88756-9522-44a6-ada2-9b3aee7a8b47
	I0108 20:40:08.459609   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:08.459616   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:08.459747   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:40:08.460148   31613 pod_ready.go:92] pod "kube-apiserver-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:40:08.460166   31613 pod_ready.go:81] duration metric: took 6.851108ms waiting for pod "kube-apiserver-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:08.460176   31613 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:08.460248   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-340815
	I0108 20:40:08.460258   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:08.460265   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:08.460273   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:08.463364   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:40:08.463383   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:08.463393   31613 round_trippers.go:580]     Audit-Id: b7d0d8b8-51a9-4a49-ad74-bde9bffcd7df
	I0108 20:40:08.463401   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:08.463414   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:08.463421   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:08.463428   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:08.463437   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:08 GMT
	I0108 20:40:08.463954   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-340815","namespace":"kube-system","uid":"3b29ca3f-d23b-4add-a5fb-d59381398862","resourceVersion":"406","creationTimestamp":"2024-01-08T20:38:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1f741652d6560a2396658aaab123d801","kubernetes.io/config.mirror":"1f741652d6560a2396658aaab123d801","kubernetes.io/config.seen":"2024-01-08T20:37:56.785419514Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0108 20:40:08.464400   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:40:08.464414   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:08.464421   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:08.464427   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:08.467068   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:08.467083   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:08.467089   31613 round_trippers.go:580]     Audit-Id: e5c9550a-b284-480f-a6b2-6a7fa0d41121
	I0108 20:40:08.467094   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:08.467099   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:08.467106   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:08.467114   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:08.467123   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:08 GMT
	I0108 20:40:08.467704   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:40:08.468025   31613 pod_ready.go:92] pod "kube-controller-manager-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:40:08.468043   31613 pod_ready.go:81] duration metric: took 7.860201ms waiting for pod "kube-controller-manager-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:08.468052   31613 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j5w6d" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:08.628390   31613 request.go:629] Waited for 160.283259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5w6d
	I0108 20:40:08.628456   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5w6d
	I0108 20:40:08.628461   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:08.628470   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:08.628476   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:08.631241   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:08.631261   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:08.631268   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:08.631274   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:08 GMT
	I0108 20:40:08.631279   31613 round_trippers.go:580]     Audit-Id: 3f34aa2b-5f38-4154-a8a5-8cb310cccef2
	I0108 20:40:08.631284   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:08.631289   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:08.631294   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:08.631545   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5w6d","generateName":"kube-proxy-","namespace":"kube-system","uid":"61568130-b69e-48ce-86f0-9a9e63ed99ab","resourceVersion":"522","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
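Editor's note: the "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own token-bucket rate limiter delaying requests locally; they are unrelated to the server-side API Priority and Fairness headers (X-Kubernetes-Pf-*) in the responses. The sketch below is a hypothetical illustration of where that limiter is configured (rest.Config QPS/Burst); the values shown are examples only and are not what minikube uses.

```go
// Hypothetical sketch: client-go delays requests with a client-side token-bucket
// rate limiter controlled by rest.Config.QPS and rest.Config.Burst. Raising them
// is one way a caller could avoid the sub-second waits logged above.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// When QPS is left at zero, client-go falls back to a small default
	// (commonly 5 requests/s with a burst of 10), which is what produces the
	// throttling waits during rapid polling like the loop in this log.
	cfg.QPS = 50    // illustrative value
	cfg.Burst = 100 // illustrative value
	_ = kubernetes.NewForConfigOrDie(cfg)
}
```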
	I0108 20:40:08.829313   31613 request.go:629] Waited for 197.251273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:08.829437   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:40:08.829457   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:08.829466   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:08.829474   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:08.832305   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:08.832331   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:08.832349   31613 round_trippers.go:580]     Audit-Id: 15308d20-f40d-4ff6-af16-1e123759ece4
	I0108 20:40:08.832358   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:08.832366   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:08.832375   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:08.832383   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:08.832393   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:08 GMT
	I0108 20:40:08.832624   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"539","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_39_57_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 3253 chars]
	I0108 20:40:08.833068   31613 pod_ready.go:92] pod "kube-proxy-j5w6d" in "kube-system" namespace has status "Ready":"True"
	I0108 20:40:08.833098   31613 pod_ready.go:81] duration metric: took 365.039639ms waiting for pod "kube-proxy-j5w6d" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:08.833108   31613 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z9xrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:09.029222   31613 request.go:629] Waited for 196.018412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9xrv
	I0108 20:40:09.029277   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9xrv
	I0108 20:40:09.029283   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:09.029291   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:09.029296   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:09.032070   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:09.032101   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:09.032109   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:09.032115   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:09 GMT
	I0108 20:40:09.032120   31613 round_trippers.go:580]     Audit-Id: a06da931-448c-4359-8c65-5af1e459634a
	I0108 20:40:09.032124   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:09.032130   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:09.032135   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:09.032756   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z9xrv","generateName":"kube-proxy-","namespace":"kube-system","uid":"a0843325-2adf-4c2f-8489-067554648b52","resourceVersion":"377","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 20:40:09.228487   31613 request.go:629] Waited for 195.31904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:40:09.228570   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:40:09.228576   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:09.228586   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:09.228593   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:09.231782   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:40:09.231806   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:09.231813   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:09.231819   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:09.231825   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:09.231830   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:09 GMT
	I0108 20:40:09.231835   31613 round_trippers.go:580]     Audit-Id: dbfc2aef-555b-4d95-bf7c-8357def975a0
	I0108 20:40:09.231840   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:09.232046   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:40:09.232407   31613 pod_ready.go:92] pod "kube-proxy-z9xrv" in "kube-system" namespace has status "Ready":"True"
	I0108 20:40:09.232428   31613 pod_ready.go:81] duration metric: took 399.313985ms waiting for pod "kube-proxy-z9xrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:09.232437   31613 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:09.428403   31613 request.go:629] Waited for 195.880507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340815
	I0108 20:40:09.428482   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340815
	I0108 20:40:09.428490   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:09.428501   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:09.428509   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:09.431458   31613 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:40:09.431483   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:09.431494   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:09.431503   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:09.431511   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:09 GMT
	I0108 20:40:09.431519   31613 round_trippers.go:580]     Audit-Id: c5391c4e-1afc-436b-af2a-b09ffca841b5
	I0108 20:40:09.431532   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:09.431540   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:09.431719   31613 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-340815","namespace":"kube-system","uid":"008c4fe8-78b1-4326-8452-215037af26d6","resourceVersion":"403","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0c87b92132627dab75791d3cff759e12","kubernetes.io/config.mirror":"0c87b92132627dab75791d3cff759e12","kubernetes.io/config.seen":"2024-01-08T20:38:05.870865233Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0108 20:40:09.628435   31613 request.go:629] Waited for 196.302005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:40:09.628517   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:40:09.628529   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:09.628540   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:09.628549   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:09.631620   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:40:09.631651   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:09.631661   31613 round_trippers.go:580]     Audit-Id: 68cc8e99-36f7-4e61-8e3d-8a3e4b86dd99
	I0108 20:40:09.631669   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:09.631678   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:09.631686   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:09.631693   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:09.631700   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:09 GMT
	I0108 20:40:09.631876   31613 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0108 20:40:09.632313   31613 pod_ready.go:92] pod "kube-scheduler-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:40:09.632335   31613 pod_ready.go:81] duration metric: took 399.892737ms waiting for pod "kube-scheduler-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:40:09.632345   31613 pod_ready.go:38] duration metric: took 1.200118198s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:40:09.632356   31613 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:40:09.632412   31613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:40:09.646819   31613 system_svc.go:56] duration metric: took 14.453053ms WaitForService to wait for kubelet.
	I0108 20:40:09.646849   31613 kubeadm.go:581] duration metric: took 11.243191484s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:40:09.646875   31613 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:40:09.829317   31613 request.go:629] Waited for 182.373109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes
	I0108 20:40:09.829366   31613 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0108 20:40:09.829376   31613 round_trippers.go:469] Request Headers:
	I0108 20:40:09.829387   31613 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:40:09.829397   31613 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:40:09.832540   31613 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:40:09.832562   31613 round_trippers.go:577] Response Headers:
	I0108 20:40:09.832569   31613 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:40:09.832574   31613 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:40:09 GMT
	I0108 20:40:09.832579   31613 round_trippers.go:580]     Audit-Id: 7f45c115-3aff-4c21-b75d-0924df2b0226
	I0108 20:40:09.832587   31613 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:40:09.832592   31613 round_trippers.go:580]     Content-Type: application/json
	I0108 20:40:09.832597   31613 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:40:09.832926   31613 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"540"},"items":[{"metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"387","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 10197 chars]
	I0108 20:40:09.833341   31613 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:40:09.833360   31613 node_conditions.go:123] node cpu capacity is 2
	I0108 20:40:09.833369   31613 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:40:09.833373   31613 node_conditions.go:123] node cpu capacity is 2
	I0108 20:40:09.833377   31613 node_conditions.go:105] duration metric: took 186.496715ms to run NodePressure ...
	I0108 20:40:09.833388   31613 start.go:228] waiting for startup goroutines ...
	I0108 20:40:09.833411   31613 start.go:242] writing updated cluster config ...
	I0108 20:40:09.833673   31613 ssh_runner.go:195] Run: rm -f paused
	I0108 20:40:09.881451   31613 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 20:40:09.884540   31613 out.go:177] * Done! kubectl is now configured to use "multinode-340815" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 20:37:33 UTC, ends at Mon 2024-01-08 20:40:19 UTC. --
	Jan 08 20:40:18 multinode-340815 crio[719]: time="2024-01-08 20:40:18.962476783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6b6df317-8a33-4072-96fd-3023c91e444e name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:40:18 multinode-340815 crio[719]: time="2024-01-08 20:40:18.962686834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6825d2c7a5b88721bfb05c58f21f2868fcf98e6bede42566c14512e2d366b23c,PodSandboxId:1d05af9179c9a516065d711a5c061ba5bce63fd1064ec57ef8c3c780b9d5c2ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704746414297559923,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-npzdk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fdfd80ec-9054-4a2c-b7f6-a912162b80a6,},Annotations:map[string]string{io.kubernetes.container.hash: cca2d931,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0321d40cd42323f19835b33f83d74fb7675aeb3d375b2aa40967ee3833f10e9,PodSandboxId:5c398305a871f1665eb79d6ea432e1ae26fee6f2a5b8c409244822e84fb79112,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704746305299640480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h4v6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1ccbb8-1747-4b6f-b40c-c54670e49d54,},Annotations:map[string]string{io.kubernetes.container.hash: c7a8decd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b924a3d64aef3e27d0ba2b9823f301abaf116ac5a38bc331d827e601b9dfcc0,PodSandboxId:4daf94636a1cd717a45139d5e68d340cab8a5b9fe56815cefb00a467034d0365,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704746305051056482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: de357297-4bd9-4c71-ada5-ceace0d38cfb,},Annotations:map[string]string{io.kubernetes.container.hash: c338046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e896eb6a58a6cd605cd9cce80cbfcd20ab97f654e9f02aad9a257cf1495fa997,PodSandboxId:33850c14af5d48904d4344ba991b6e4dbebd012ea5a22c1c0f7909a7fa40cd00,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704746302512344150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h48qs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 65d532d3-b3ca-493d-b287-1b03dbdad538,},Annotations:map[string]string{io.kubernetes.container.hash: ac4d424e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5760fd83bac99a63bb6a093d3345e9c1a9f240b6b8f12dc7162afb5453064b87,PodSandboxId:0d7543dc8bf841fea9d6cb992b09ac02719c8b5c7770d8e1ef4c256524f3f97d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704746300057271331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9xrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0843325-2adf-4c2f-8489-0675546
48b52,},Annotations:map[string]string{io.kubernetes.container.hash: 91a148c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7818ec00f54e35ede7b87894cf8ac5c7a56c72d20b2ac0d4200e9ac60b7c86d6,PodSandboxId:53b0c81b3fee4ee12d0b8df321123c3b5cb0b33e682d4308fd1089a02b956974,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704746278573107450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c87b92132627dab75791d3cff759e12,},Anno
tations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efc6e9b5a3e47fd6be387b84c3af012c7bfb77c18249c61578407026a0844df,PodSandboxId:33a68ea4b6e5b053d940b52fa3c72139bf513606e0f6a9909920cb71c839c808,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704746278159970758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f741652d6560a2
396658aaab123d801,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31d9f28673970bf79fd9a9c51f4e67226ee46049c61ca7fb100a21b0172fa8ff,PodSandboxId:6696202cf0e45a685ed5c831a0e913a2de15a66880e0e568c9d4aa2b9b599261,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704746277857953350,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84677478c7d9bd76d7500f07832cd213,},Annotations:map[string]string{io
.kubernetes.container.hash: c58e30cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d2d5342a010b354049254f307f86def47f9969d4181dee8e0a32622e57feea,PodSandboxId:e722d6ac216556a5655fc53658c7c9900c59f995759582323fcdc000fc2866e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704746277755961234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9f4acc9b0ffa502cc0493a6d857b92,},Annotations:map[string]string{io.kubernetes.
container.hash: 22dbb42a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6b6df317-8a33-4072-96fd-3023c91e444e name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.006837607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3a5ea2fc-7a02-4b66-8dbd-be050e27e403 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.006920625Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3a5ea2fc-7a02-4b66-8dbd-be050e27e403 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.008050834Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=420578e7-f1cb-41f3-a2b8-315ef9d94b68 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.008437368Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704746419008423750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=420578e7-f1cb-41f3-a2b8-315ef9d94b68 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.009449777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=17be536c-234a-4723-bb4d-575cd4aceb10 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.009497963Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=17be536c-234a-4723-bb4d-575cd4aceb10 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.009687265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6825d2c7a5b88721bfb05c58f21f2868fcf98e6bede42566c14512e2d366b23c,PodSandboxId:1d05af9179c9a516065d711a5c061ba5bce63fd1064ec57ef8c3c780b9d5c2ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704746414297559923,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-npzdk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fdfd80ec-9054-4a2c-b7f6-a912162b80a6,},Annotations:map[string]string{io.kubernetes.container.hash: cca2d931,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0321d40cd42323f19835b33f83d74fb7675aeb3d375b2aa40967ee3833f10e9,PodSandboxId:5c398305a871f1665eb79d6ea432e1ae26fee6f2a5b8c409244822e84fb79112,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704746305299640480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h4v6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1ccbb8-1747-4b6f-b40c-c54670e49d54,},Annotations:map[string]string{io.kubernetes.container.hash: c7a8decd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b924a3d64aef3e27d0ba2b9823f301abaf116ac5a38bc331d827e601b9dfcc0,PodSandboxId:4daf94636a1cd717a45139d5e68d340cab8a5b9fe56815cefb00a467034d0365,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704746305051056482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: de357297-4bd9-4c71-ada5-ceace0d38cfb,},Annotations:map[string]string{io.kubernetes.container.hash: c338046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e896eb6a58a6cd605cd9cce80cbfcd20ab97f654e9f02aad9a257cf1495fa997,PodSandboxId:33850c14af5d48904d4344ba991b6e4dbebd012ea5a22c1c0f7909a7fa40cd00,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704746302512344150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h48qs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 65d532d3-b3ca-493d-b287-1b03dbdad538,},Annotations:map[string]string{io.kubernetes.container.hash: ac4d424e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5760fd83bac99a63bb6a093d3345e9c1a9f240b6b8f12dc7162afb5453064b87,PodSandboxId:0d7543dc8bf841fea9d6cb992b09ac02719c8b5c7770d8e1ef4c256524f3f97d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704746300057271331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9xrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0843325-2adf-4c2f-8489-0675546
48b52,},Annotations:map[string]string{io.kubernetes.container.hash: 91a148c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7818ec00f54e35ede7b87894cf8ac5c7a56c72d20b2ac0d4200e9ac60b7c86d6,PodSandboxId:53b0c81b3fee4ee12d0b8df321123c3b5cb0b33e682d4308fd1089a02b956974,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704746278573107450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c87b92132627dab75791d3cff759e12,},Anno
tations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efc6e9b5a3e47fd6be387b84c3af012c7bfb77c18249c61578407026a0844df,PodSandboxId:33a68ea4b6e5b053d940b52fa3c72139bf513606e0f6a9909920cb71c839c808,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704746278159970758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f741652d6560a2
396658aaab123d801,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31d9f28673970bf79fd9a9c51f4e67226ee46049c61ca7fb100a21b0172fa8ff,PodSandboxId:6696202cf0e45a685ed5c831a0e913a2de15a66880e0e568c9d4aa2b9b599261,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704746277857953350,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84677478c7d9bd76d7500f07832cd213,},Annotations:map[string]string{io
.kubernetes.container.hash: c58e30cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d2d5342a010b354049254f307f86def47f9969d4181dee8e0a32622e57feea,PodSandboxId:e722d6ac216556a5655fc53658c7c9900c59f995759582323fcdc000fc2866e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704746277755961234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9f4acc9b0ffa502cc0493a6d857b92,},Annotations:map[string]string{io.kubernetes.
container.hash: 22dbb42a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=17be536c-234a-4723-bb4d-575cd4aceb10 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.052831526Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a66b4283-b285-4dd0-becf-5a2b1413e903 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.052919108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a66b4283-b285-4dd0-becf-5a2b1413e903 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.054266083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9be6d1cc-d653-4fcb-ab43-77edf7638a39 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.054656805Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704746419054645127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9be6d1cc-d653-4fcb-ab43-77edf7638a39 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.055367316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7f6f56d6-d9f3-4315-8996-836b9e7b67dc name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.055443454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7f6f56d6-d9f3-4315-8996-836b9e7b67dc name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.055650768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6825d2c7a5b88721bfb05c58f21f2868fcf98e6bede42566c14512e2d366b23c,PodSandboxId:1d05af9179c9a516065d711a5c061ba5bce63fd1064ec57ef8c3c780b9d5c2ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704746414297559923,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-npzdk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fdfd80ec-9054-4a2c-b7f6-a912162b80a6,},Annotations:map[string]string{io.kubernetes.container.hash: cca2d931,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0321d40cd42323f19835b33f83d74fb7675aeb3d375b2aa40967ee3833f10e9,PodSandboxId:5c398305a871f1665eb79d6ea432e1ae26fee6f2a5b8c409244822e84fb79112,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704746305299640480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h4v6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1ccbb8-1747-4b6f-b40c-c54670e49d54,},Annotations:map[string]string{io.kubernetes.container.hash: c7a8decd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b924a3d64aef3e27d0ba2b9823f301abaf116ac5a38bc331d827e601b9dfcc0,PodSandboxId:4daf94636a1cd717a45139d5e68d340cab8a5b9fe56815cefb00a467034d0365,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704746305051056482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: de357297-4bd9-4c71-ada5-ceace0d38cfb,},Annotations:map[string]string{io.kubernetes.container.hash: c338046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e896eb6a58a6cd605cd9cce80cbfcd20ab97f654e9f02aad9a257cf1495fa997,PodSandboxId:33850c14af5d48904d4344ba991b6e4dbebd012ea5a22c1c0f7909a7fa40cd00,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704746302512344150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h48qs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 65d532d3-b3ca-493d-b287-1b03dbdad538,},Annotations:map[string]string{io.kubernetes.container.hash: ac4d424e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5760fd83bac99a63bb6a093d3345e9c1a9f240b6b8f12dc7162afb5453064b87,PodSandboxId:0d7543dc8bf841fea9d6cb992b09ac02719c8b5c7770d8e1ef4c256524f3f97d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704746300057271331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9xrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0843325-2adf-4c2f-8489-0675546
48b52,},Annotations:map[string]string{io.kubernetes.container.hash: 91a148c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7818ec00f54e35ede7b87894cf8ac5c7a56c72d20b2ac0d4200e9ac60b7c86d6,PodSandboxId:53b0c81b3fee4ee12d0b8df321123c3b5cb0b33e682d4308fd1089a02b956974,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704746278573107450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c87b92132627dab75791d3cff759e12,},Anno
tations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efc6e9b5a3e47fd6be387b84c3af012c7bfb77c18249c61578407026a0844df,PodSandboxId:33a68ea4b6e5b053d940b52fa3c72139bf513606e0f6a9909920cb71c839c808,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704746278159970758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f741652d6560a2
396658aaab123d801,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31d9f28673970bf79fd9a9c51f4e67226ee46049c61ca7fb100a21b0172fa8ff,PodSandboxId:6696202cf0e45a685ed5c831a0e913a2de15a66880e0e568c9d4aa2b9b599261,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704746277857953350,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84677478c7d9bd76d7500f07832cd213,},Annotations:map[string]string{io
.kubernetes.container.hash: c58e30cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d2d5342a010b354049254f307f86def47f9969d4181dee8e0a32622e57feea,PodSandboxId:e722d6ac216556a5655fc53658c7c9900c59f995759582323fcdc000fc2866e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704746277755961234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9f4acc9b0ffa502cc0493a6d857b92,},Annotations:map[string]string{io.kubernetes.
container.hash: 22dbb42a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7f6f56d6-d9f3-4315-8996-836b9e7b67dc name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.098782803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e0e36805-5376-4268-b15a-8daf611db6fb name=/runtime.v1.RuntimeService/Version
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.098872176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e0e36805-5376-4268-b15a-8daf611db6fb name=/runtime.v1.RuntimeService/Version
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.099939607Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=38b1f990-57e3-410a-9057-4998b4b83713 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.100317959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704746419100303857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=38b1f990-57e3-410a-9057-4998b4b83713 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.100797606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=51e12676-a6ff-465a-b694-2cf6609354cd name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.100873151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=51e12676-a6ff-465a-b694-2cf6609354cd name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.101088466Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6825d2c7a5b88721bfb05c58f21f2868fcf98e6bede42566c14512e2d366b23c,PodSandboxId:1d05af9179c9a516065d711a5c061ba5bce63fd1064ec57ef8c3c780b9d5c2ff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704746414297559923,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-npzdk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fdfd80ec-9054-4a2c-b7f6-a912162b80a6,},Annotations:map[string]string{io.kubernetes.container.hash: cca2d931,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0321d40cd42323f19835b33f83d74fb7675aeb3d375b2aa40967ee3833f10e9,PodSandboxId:5c398305a871f1665eb79d6ea432e1ae26fee6f2a5b8c409244822e84fb79112,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704746305299640480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h4v6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1ccbb8-1747-4b6f-b40c-c54670e49d54,},Annotations:map[string]string{io.kubernetes.container.hash: c7a8decd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b924a3d64aef3e27d0ba2b9823f301abaf116ac5a38bc331d827e601b9dfcc0,PodSandboxId:4daf94636a1cd717a45139d5e68d340cab8a5b9fe56815cefb00a467034d0365,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704746305051056482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: de357297-4bd9-4c71-ada5-ceace0d38cfb,},Annotations:map[string]string{io.kubernetes.container.hash: c338046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e896eb6a58a6cd605cd9cce80cbfcd20ab97f654e9f02aad9a257cf1495fa997,PodSandboxId:33850c14af5d48904d4344ba991b6e4dbebd012ea5a22c1c0f7909a7fa40cd00,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704746302512344150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h48qs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 65d532d3-b3ca-493d-b287-1b03dbdad538,},Annotations:map[string]string{io.kubernetes.container.hash: ac4d424e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5760fd83bac99a63bb6a093d3345e9c1a9f240b6b8f12dc7162afb5453064b87,PodSandboxId:0d7543dc8bf841fea9d6cb992b09ac02719c8b5c7770d8e1ef4c256524f3f97d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704746300057271331,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9xrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0843325-2adf-4c2f-8489-0675546
48b52,},Annotations:map[string]string{io.kubernetes.container.hash: 91a148c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7818ec00f54e35ede7b87894cf8ac5c7a56c72d20b2ac0d4200e9ac60b7c86d6,PodSandboxId:53b0c81b3fee4ee12d0b8df321123c3b5cb0b33e682d4308fd1089a02b956974,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704746278573107450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c87b92132627dab75791d3cff759e12,},Anno
tations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8efc6e9b5a3e47fd6be387b84c3af012c7bfb77c18249c61578407026a0844df,PodSandboxId:33a68ea4b6e5b053d940b52fa3c72139bf513606e0f6a9909920cb71c839c808,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704746278159970758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f741652d6560a2
396658aaab123d801,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31d9f28673970bf79fd9a9c51f4e67226ee46049c61ca7fb100a21b0172fa8ff,PodSandboxId:6696202cf0e45a685ed5c831a0e913a2de15a66880e0e568c9d4aa2b9b599261,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704746277857953350,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84677478c7d9bd76d7500f07832cd213,},Annotations:map[string]string{io
.kubernetes.container.hash: c58e30cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0d2d5342a010b354049254f307f86def47f9969d4181dee8e0a32622e57feea,PodSandboxId:e722d6ac216556a5655fc53658c7c9900c59f995759582323fcdc000fc2866e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704746277755961234,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9f4acc9b0ffa502cc0493a6d857b92,},Annotations:map[string]string{io.kubernetes.
container.hash: 22dbb42a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=51e12676-a6ff-465a-b694-2cf6609354cd name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.101932999Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="go-grpc-middleware/chain.go:25" id=91143a27-fa49-432a-ae53-efbedf0c7a25 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:40:19 multinode-340815 crio[719]: time="2024-01-08 20:40:19.101987864Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=91143a27-fa49-432a-ae53-efbedf0c7a25 name=/runtime.v1.RuntimeService/Version
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6825d2c7a5b88       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   1d05af9179c9a       busybox-5bc68d56bd-npzdk
	d0321d40cd423       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   0                   5c398305a871f       coredns-5dd5756b68-h4v6v
	1b924a3d64aef       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   4daf94636a1cd       storage-provisioner
	e896eb6a58a6c       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      About a minute ago   Running             kindnet-cni               0                   33850c14af5d4       kindnet-h48qs
	5760fd83bac99       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   0d7543dc8bf84       kube-proxy-z9xrv
	7818ec00f54e3       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      2 minutes ago        Running             kube-scheduler            0                   53b0c81b3fee4       kube-scheduler-multinode-340815
	8efc6e9b5a3e4       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago        Running             kube-controller-manager   0                   33a68ea4b6e5b       kube-controller-manager-multinode-340815
	31d9f28673970       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      2 minutes ago        Running             etcd                      0                   6696202cf0e45       etcd-multinode-340815
	f0d2d5342a010       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Running             kube-apiserver            0                   e722d6ac21655       kube-apiserver-multinode-340815
	
	
	==> coredns [d0321d40cd42323f19835b33f83d74fb7675aeb3d375b2aa40967ee3833f10e9] <==
	[INFO] 10.244.1.2:46097 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000192076s
	[INFO] 10.244.0.3:45684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101067s
	[INFO] 10.244.0.3:43177 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001877744s
	[INFO] 10.244.0.3:34956 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087546s
	[INFO] 10.244.0.3:45784 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078658s
	[INFO] 10.244.0.3:42083 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001454642s
	[INFO] 10.244.0.3:35982 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077662s
	[INFO] 10.244.0.3:42050 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066568s
	[INFO] 10.244.0.3:45060 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00016519s
	[INFO] 10.244.1.2:40984 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205413s
	[INFO] 10.244.1.2:37728 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158493s
	[INFO] 10.244.1.2:45250 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148154s
	[INFO] 10.244.1.2:49794 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167384s
	[INFO] 10.244.0.3:45330 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100247s
	[INFO] 10.244.0.3:57873 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116629s
	[INFO] 10.244.0.3:39506 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072281s
	[INFO] 10.244.0.3:43490 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091383s
	[INFO] 10.244.1.2:57576 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015863s
	[INFO] 10.244.1.2:40780 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000187575s
	[INFO] 10.244.1.2:37343 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000145326s
	[INFO] 10.244.1.2:43765 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000195183s
	[INFO] 10.244.0.3:58887 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000073969s
	[INFO] 10.244.0.3:37685 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000045416s
	[INFO] 10.244.0.3:58242 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134092s
	[INFO] 10.244.0.3:58945 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000043979s
	
	
	==> describe nodes <==
	Name:               multinode-340815
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-340815
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-340815
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T20_38_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:38:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-340815
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:40:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:38:24 +0000   Mon, 08 Jan 2024 20:37:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:38:24 +0000   Mon, 08 Jan 2024 20:37:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:38:24 +0000   Mon, 08 Jan 2024 20:37:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:38:24 +0000   Mon, 08 Jan 2024 20:38:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    multinode-340815
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 686b856db38c4ec1b793361572ee285f
	  System UUID:                686b856d-b38c-4ec1-b793-361572ee285f
	  Boot ID:                    3a93fcf2-aeb3-41bd-bf12-0e92cf91c8b1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-npzdk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-h4v6v                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m1s
	  kube-system                 etcd-multinode-340815                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m13s
	  kube-system                 kindnet-h48qs                                100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m1s
	  kube-system                 kube-apiserver-multinode-340815              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-multinode-340815    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-proxy-z9xrv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-scheduler-multinode-340815              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 119s   kube-proxy       
	  Normal  Starting                 2m14s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s  kubelet          Node multinode-340815 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s  kubelet          Node multinode-340815 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s  kubelet          Node multinode-340815 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m2s   node-controller  Node multinode-340815 event: Registered Node multinode-340815 in Controller
	  Normal  NodeReady                115s   kubelet          Node multinode-340815 status is now: NodeReady
	
	
	Name:               multinode-340815-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-340815-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-340815
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T20_39_57_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:39:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-340815-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:40:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:40:07 +0000   Mon, 08 Jan 2024 20:39:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:40:07 +0000   Mon, 08 Jan 2024 20:39:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:40:07 +0000   Mon, 08 Jan 2024 20:39:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:40:07 +0000   Mon, 08 Jan 2024 20:40:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    multinode-340815-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6eff253b55e94982ab242ed793ce3707
	  System UUID:                6eff253b-55e9-4982-ab24-2ed793ce3707
	  Boot ID:                    aa127115-c411-412c-9353-fe16e6dae98a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-95tbd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-tqjx8               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22s
	  kube-system                 kube-proxy-j5w6d            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18s                kube-proxy       
	  Normal  RegisteredNode           22s                node-controller  Node multinode-340815-m02 event: Registered Node multinode-340815-m02 in Controller
	  Normal  NodeHasSufficientMemory  22s (x5 over 24s)  kubelet          Node multinode-340815-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x5 over 24s)  kubelet          Node multinode-340815-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x5 over 24s)  kubelet          Node multinode-340815-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12s                kubelet          Node multinode-340815-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan 8 20:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068091] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.395745] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.418362] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.140727] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.080081] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.955542] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.102647] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.131617] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.103683] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.211109] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +9.828879] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[Jan 8 20:38] systemd-fstab-generator[1264]: Ignoring "noauto" for root device
	[ +20.573494] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [31d9f28673970bf79fd9a9c51f4e67226ee46049c61ca7fb100a21b0172fa8ff] <==
	{"level":"info","ts":"2024-01-08T20:37:59.830354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 switched to configuration voters=(11623670073473264757)"}
	{"level":"info","ts":"2024-01-08T20:37:59.830674Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","added-peer-id":"a14f9258d3b66c75","added-peer-peer-urls":["https://192.168.39.196:2380"]}
	{"level":"info","ts":"2024-01-08T20:37:59.836754Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T20:37:59.836945Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a14f9258d3b66c75","initial-advertise-peer-urls":["https://192.168.39.196:2380"],"listen-peer-urls":["https://192.168.39.196:2380"],"advertise-client-urls":["https://192.168.39.196:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.196:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T20:37:59.836988Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T20:37:59.837566Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-01-08T20:37:59.837666Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-01-08T20:38:00.779217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-08T20:38:00.77934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-08T20:38:00.779393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 received MsgPreVoteResp from a14f9258d3b66c75 at term 1"}
	{"level":"info","ts":"2024-01-08T20:38:00.779433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T20:38:00.779457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 received MsgVoteResp from a14f9258d3b66c75 at term 2"}
	{"level":"info","ts":"2024-01-08T20:38:00.779487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became leader at term 2"}
	{"level":"info","ts":"2024-01-08T20:38:00.779512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a14f9258d3b66c75 elected leader a14f9258d3b66c75 at term 2"}
	{"level":"info","ts":"2024-01-08T20:38:00.781485Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:38:00.782817Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a14f9258d3b66c75","local-member-attributes":"{Name:multinode-340815 ClientURLs:[https://192.168.39.196:2379]}","request-path":"/0/members/a14f9258d3b66c75/attributes","cluster-id":"8309c60c27e527a4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T20:38:00.78388Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:38:00.784007Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:38:00.784203Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:38:00.784257Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:38:00.784315Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T20:38:00.78434Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T20:38:00.784364Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:38:00.785149Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T20:38:00.785427Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.196:2379"}
	
	
	==> kernel <==
	 20:40:19 up 2 min,  0 users,  load average: 0.18, 0.19, 0.08
	Linux multinode-340815 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [e896eb6a58a6cd605cd9cce80cbfcd20ab97f654e9f02aad9a257cf1495fa997] <==
	I0108 20:38:44.068100       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:38:44.068153       1 main.go:227] handling current node
	I0108 20:38:54.080164       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:38:54.080252       1 main.go:227] handling current node
	I0108 20:39:04.094856       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:39:04.094986       1 main.go:227] handling current node
	I0108 20:39:14.109437       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:39:14.109632       1 main.go:227] handling current node
	I0108 20:39:24.114153       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:39:24.114239       1 main.go:227] handling current node
	I0108 20:39:34.128326       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:39:34.128387       1 main.go:227] handling current node
	I0108 20:39:44.138273       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:39:44.138378       1 main.go:227] handling current node
	I0108 20:39:54.145176       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:39:54.145244       1 main.go:227] handling current node
	I0108 20:40:04.159426       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:40:04.159539       1 main.go:227] handling current node
	I0108 20:40:04.159559       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0108 20:40:04.159565       1 main.go:250] Node multinode-340815-m02 has CIDR [10.244.1.0/24] 
	I0108 20:40:04.160870       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.78 Flags: [] Table: 0} 
	I0108 20:40:14.174160       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:40:14.174220       1 main.go:227] handling current node
	I0108 20:40:14.174252       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0108 20:40:14.174262       1 main.go:250] Node multinode-340815-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [f0d2d5342a010b354049254f307f86def47f9969d4181dee8e0a32622e57feea] <==
	I0108 20:38:02.426908       1 aggregator.go:166] initial CRD sync complete...
	I0108 20:38:02.426930       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 20:38:02.426950       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 20:38:02.426971       1 cache.go:39] Caches are synced for autoregister controller
	I0108 20:38:02.427171       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0108 20:38:02.440528       1 controller.go:624] quota admission added evaluator for: namespaces
	I0108 20:38:02.482204       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 20:38:02.505199       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 20:38:02.505203       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0108 20:38:02.508520       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0108 20:38:03.311634       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 20:38:03.323503       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 20:38:03.323550       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 20:38:04.049859       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 20:38:04.103340       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 20:38:04.231448       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0108 20:38:04.241863       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.196]
	I0108 20:38:04.242957       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 20:38:04.248262       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 20:38:04.397455       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 20:38:05.733140       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 20:38:05.757373       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0108 20:38:05.773353       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 20:38:18.009350       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0108 20:38:18.208101       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8efc6e9b5a3e47fd6be387b84c3af012c7bfb77c18249c61578407026a0844df] <==
	I0108 20:38:18.997540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="173.539µs"
	I0108 20:38:24.256344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.645µs"
	I0108 20:38:24.307133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.163µs"
	I0108 20:38:26.110460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.273558ms"
	I0108 20:38:26.112548       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.928µs"
	I0108 20:38:27.363664       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0108 20:39:57.189987       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-340815-m02\" does not exist"
	I0108 20:39:57.217116       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tqjx8"
	I0108 20:39:57.222118       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-340815-m02" podCIDRs=["10.244.1.0/24"]
	I0108 20:39:57.224018       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-j5w6d"
	I0108 20:39:57.380659       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-340815-m02"
	I0108 20:39:57.381053       1 event.go:307] "Event occurred" object="multinode-340815-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-340815-m02 event: Registered Node multinode-340815-m02 in Controller"
	I0108 20:40:07.970007       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-340815-m02"
	I0108 20:40:10.662498       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0108 20:40:10.684000       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-95tbd"
	I0108 20:40:10.703348       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-npzdk"
	I0108 20:40:10.717992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="54.948681ms"
	I0108 20:40:10.738050       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="19.944898ms"
	I0108 20:40:10.761564       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.40562ms"
	I0108 20:40:10.761825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="139.764µs"
	I0108 20:40:12.397991       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-95tbd" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-95tbd"
	I0108 20:40:14.883903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.943096ms"
	I0108 20:40:14.884404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="85.553µs"
	I0108 20:40:15.473158       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.115446ms"
	I0108 20:40:15.473482       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="78.438µs"
	
	
	==> kube-proxy [5760fd83bac99a63bb6a093d3345e9c1a9f240b6b8f12dc7162afb5453064b87] <==
	I0108 20:38:20.284364       1 server_others.go:69] "Using iptables proxy"
	I0108 20:38:20.300607       1 node.go:141] Successfully retrieved node IP: 192.168.39.196
	I0108 20:38:20.351266       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 20:38:20.351388       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 20:38:20.357775       1 server_others.go:152] "Using iptables Proxier"
	I0108 20:38:20.357848       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 20:38:20.358075       1 server.go:846] "Version info" version="v1.28.4"
	I0108 20:38:20.358114       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:38:20.360279       1 config.go:315] "Starting node config controller"
	I0108 20:38:20.360361       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:38:20.360683       1 config.go:188] "Starting service config controller"
	I0108 20:38:20.360696       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:38:20.360806       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:38:20.360815       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:38:20.460949       1 shared_informer.go:318] Caches are synced for node config
	I0108 20:38:20.461007       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:38:20.461177       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7818ec00f54e35ede7b87894cf8ac5c7a56c72d20b2ac0d4200e9ac60b7c86d6] <==
	W0108 20:38:02.466901       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 20:38:02.466938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 20:38:02.466982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:38:02.467016       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 20:38:03.341651       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 20:38:03.341814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 20:38:03.402911       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 20:38:03.403005       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 20:38:03.425839       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 20:38:03.425940       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 20:38:03.471955       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 20:38:03.472041       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 20:38:03.531958       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 20:38:03.532045       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 20:38:03.613510       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 20:38:03.613563       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 20:38:03.725221       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 20:38:03.725276       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 20:38:03.737070       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 20:38:03.737129       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 20:38:03.738812       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 20:38:03.738859       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 20:38:03.855547       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 20:38:03.855642       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0108 20:38:05.753917       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 20:37:33 UTC, ends at Mon 2024-01-08 20:40:19 UTC. --
	Jan 08 20:38:18 multinode-340815 kubelet[1271]: I0108 20:38:18.566821    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65d532d3-b3ca-493d-b287-1b03dbdad538-xtables-lock\") pod \"kindnet-h48qs\" (UID: \"65d532d3-b3ca-493d-b287-1b03dbdad538\") " pod="kube-system/kindnet-h48qs"
	Jan 08 20:38:18 multinode-340815 kubelet[1271]: I0108 20:38:18.566840    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkr2s\" (UniqueName: \"kubernetes.io/projected/65d532d3-b3ca-493d-b287-1b03dbdad538-kube-api-access-rkr2s\") pod \"kindnet-h48qs\" (UID: \"65d532d3-b3ca-493d-b287-1b03dbdad538\") " pod="kube-system/kindnet-h48qs"
	Jan 08 20:38:18 multinode-340815 kubelet[1271]: I0108 20:38:18.566857    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsdfk\" (UniqueName: \"kubernetes.io/projected/a0843325-2adf-4c2f-8489-067554648b52-kube-api-access-lsdfk\") pod \"kube-proxy-z9xrv\" (UID: \"a0843325-2adf-4c2f-8489-067554648b52\") " pod="kube-system/kube-proxy-z9xrv"
	Jan 08 20:38:24 multinode-340815 kubelet[1271]: I0108 20:38:24.047684    1271 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-z9xrv" podStartSLOduration=6.047626225 podCreationTimestamp="2024-01-08 20:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 20:38:21.036170599 +0000 UTC m=+15.326279240" watchObservedRunningTime="2024-01-08 20:38:24.047626225 +0000 UTC m=+18.337734867"
	Jan 08 20:38:24 multinode-340815 kubelet[1271]: I0108 20:38:24.198597    1271 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 08 20:38:24 multinode-340815 kubelet[1271]: I0108 20:38:24.240238    1271 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-h48qs" podStartSLOduration=6.240203171 podCreationTimestamp="2024-01-08 20:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 20:38:24.048055505 +0000 UTC m=+18.338164149" watchObservedRunningTime="2024-01-08 20:38:24.240203171 +0000 UTC m=+18.530311948"
	Jan 08 20:38:24 multinode-340815 kubelet[1271]: I0108 20:38:24.240384    1271 topology_manager.go:215] "Topology Admit Handler" podUID="de357297-4bd9-4c71-ada5-ceace0d38cfb" podNamespace="kube-system" podName="storage-provisioner"
	Jan 08 20:38:24 multinode-340815 kubelet[1271]: I0108 20:38:24.246269    1271 topology_manager.go:215] "Topology Admit Handler" podUID="5c1ccbb8-1747-4b6f-b40c-c54670e49d54" podNamespace="kube-system" podName="coredns-5dd5756b68-h4v6v"
	Jan 08 20:38:24 multinode-340815 kubelet[1271]: I0108 20:38:24.308589    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwmdl\" (UniqueName: \"kubernetes.io/projected/5c1ccbb8-1747-4b6f-b40c-c54670e49d54-kube-api-access-kwmdl\") pod \"coredns-5dd5756b68-h4v6v\" (UID: \"5c1ccbb8-1747-4b6f-b40c-c54670e49d54\") " pod="kube-system/coredns-5dd5756b68-h4v6v"
	Jan 08 20:38:24 multinode-340815 kubelet[1271]: I0108 20:38:24.308801    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/de357297-4bd9-4c71-ada5-ceace0d38cfb-tmp\") pod \"storage-provisioner\" (UID: \"de357297-4bd9-4c71-ada5-ceace0d38cfb\") " pod="kube-system/storage-provisioner"
	Jan 08 20:38:24 multinode-340815 kubelet[1271]: I0108 20:38:24.308828    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fphc5\" (UniqueName: \"kubernetes.io/projected/de357297-4bd9-4c71-ada5-ceace0d38cfb-kube-api-access-fphc5\") pod \"storage-provisioner\" (UID: \"de357297-4bd9-4c71-ada5-ceace0d38cfb\") " pod="kube-system/storage-provisioner"
	Jan 08 20:38:24 multinode-340815 kubelet[1271]: I0108 20:38:24.308849    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c1ccbb8-1747-4b6f-b40c-c54670e49d54-config-volume\") pod \"coredns-5dd5756b68-h4v6v\" (UID: \"5c1ccbb8-1747-4b6f-b40c-c54670e49d54\") " pod="kube-system/coredns-5dd5756b68-h4v6v"
	Jan 08 20:38:26 multinode-340815 kubelet[1271]: I0108 20:38:26.094456    1271 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.09441025 podCreationTimestamp="2024-01-08 20:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 20:38:26.079225452 +0000 UTC m=+20.369334123" watchObservedRunningTime="2024-01-08 20:38:26.09441025 +0000 UTC m=+20.384518914"
	Jan 08 20:39:05 multinode-340815 kubelet[1271]: E0108 20:39:05.947974    1271 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 20:39:05 multinode-340815 kubelet[1271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 20:39:05 multinode-340815 kubelet[1271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 20:39:05 multinode-340815 kubelet[1271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 20:40:05 multinode-340815 kubelet[1271]: E0108 20:40:05.949076    1271 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 20:40:05 multinode-340815 kubelet[1271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 20:40:05 multinode-340815 kubelet[1271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 20:40:05 multinode-340815 kubelet[1271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 20:40:10 multinode-340815 kubelet[1271]: I0108 20:40:10.723043    1271 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-h4v6v" podStartSLOduration=112.722974357 podCreationTimestamp="2024-01-08 20:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 20:38:26.095479317 +0000 UTC m=+20.385587961" watchObservedRunningTime="2024-01-08 20:40:10.722974357 +0000 UTC m=+125.013083000"
	Jan 08 20:40:10 multinode-340815 kubelet[1271]: I0108 20:40:10.723241    1271 topology_manager.go:215] "Topology Admit Handler" podUID="fdfd80ec-9054-4a2c-b7f6-a912162b80a6" podNamespace="default" podName="busybox-5bc68d56bd-npzdk"
	Jan 08 20:40:10 multinode-340815 kubelet[1271]: I0108 20:40:10.737148    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5nf5\" (UniqueName: \"kubernetes.io/projected/fdfd80ec-9054-4a2c-b7f6-a912162b80a6-kube-api-access-f5nf5\") pod \"busybox-5bc68d56bd-npzdk\" (UID: \"fdfd80ec-9054-4a2c-b7f6-a912162b80a6\") " pod="default/busybox-5bc68d56bd-npzdk"
	Jan 08 20:40:15 multinode-340815 kubelet[1271]: I0108 20:40:15.466143    1271 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-npzdk" podStartSLOduration=2.846147518 podCreationTimestamp="2024-01-08 20:40:10 +0000 UTC" firstStartedPulling="2024-01-08 20:40:11.648577247 +0000 UTC m=+125.938685871" lastFinishedPulling="2024-01-08 20:40:14.26853662 +0000 UTC m=+128.558645258" observedRunningTime="2024-01-08 20:40:15.465077878 +0000 UTC m=+129.755186505" watchObservedRunningTime="2024-01-08 20:40:15.466106905 +0000 UTC m=+129.756215543"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-340815 -n multinode-340815
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-340815 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.35s)
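For anyone triaging this failure by hand, the post-mortem shown above can be regathered with the same commands the harness invokes (a minimal sketch; the profile name multinode-340815 and the out/minikube-linux-amd64 binary path are taken verbatim from the log lines above):

  # manual rerun of the harness's post-mortem collection (same commands as helpers_test.go)
  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-340815 -n multinode-340815
  kubectl --context multinode-340815 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
  out/minikube-linux-amd64 -p multinode-340815 logs -n 25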

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (708.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-340815
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-340815
E0108 20:41:59.477857   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-340815: exit status 82 (2m1.482035028s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-340815"  ...
	* Stopping node "multinode-340815"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-340815" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-340815 --wait=true -v=8 --alsologtostderr
E0108 20:44:26.820208   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:45:36.430002   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:46:04.517431   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:47:27.562983   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:49:26.819875   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:50:36.429168   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:50:49.869141   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:51:04.517339   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-340815 --wait=true -v=8 --alsologtostderr: (9m43.738617781s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-340815
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-340815 -n multinode-340815
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-340815 logs -n 25: (1.691326288s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-340815 ssh -n                                                                 | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | multinode-340815-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-340815 cp multinode-340815-m02:/home/docker/cp-test.txt                       | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile686812324/001/cp-test_multinode-340815-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-340815 ssh -n                                                                 | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | multinode-340815-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-340815 cp multinode-340815-m02:/home/docker/cp-test.txt                       | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | multinode-340815:/home/docker/cp-test_multinode-340815-m02_multinode-340815.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-340815 ssh -n                                                                 | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | multinode-340815-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-340815 ssh -n multinode-340815 sudo cat                                       | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | /home/docker/cp-test_multinode-340815-m02_multinode-340815.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-340815 cp multinode-340815-m02:/home/docker/cp-test.txt                       | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | multinode-340815-m03:/home/docker/cp-test_multinode-340815-m02_multinode-340815-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-340815 ssh -n                                                                 | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | multinode-340815-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-340815 ssh -n multinode-340815-m03 sudo cat                                   | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | /home/docker/cp-test_multinode-340815-m02_multinode-340815-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-340815 cp testdata/cp-test.txt                                                | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | multinode-340815-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-340815 ssh -n                                                                 | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | multinode-340815-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-340815 cp multinode-340815-m03:/home/docker/cp-test.txt                       | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile686812324/001/cp-test_multinode-340815-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-340815 ssh -n                                                                 | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | multinode-340815-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-340815 cp multinode-340815-m03:/home/docker/cp-test.txt                       | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | multinode-340815:/home/docker/cp-test_multinode-340815-m03_multinode-340815.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-340815 ssh -n                                                                 | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | multinode-340815-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-340815 ssh -n multinode-340815 sudo cat                                       | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | /home/docker/cp-test_multinode-340815-m03_multinode-340815.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-340815 cp multinode-340815-m03:/home/docker/cp-test.txt                       | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | multinode-340815-m02:/home/docker/cp-test_multinode-340815-m03_multinode-340815-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-340815 ssh -n                                                                 | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | multinode-340815-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-340815 ssh -n multinode-340815-m02 sudo cat                                   | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | /home/docker/cp-test_multinode-340815-m03_multinode-340815-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-340815 node stop m03                                                          | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	| node    | multinode-340815 node start                                                             | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC | 08 Jan 24 20:41 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-340815                                                                | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC |                     |
	| stop    | -p multinode-340815                                                                     | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:41 UTC |                     |
	| start   | -p multinode-340815                                                                     | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:43 UTC | 08 Jan 24 20:53 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-340815                                                                | multinode-340815 | jenkins | v1.32.0 | 08 Jan 24 20:53 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:43:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:43:51.511996   35097 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:43:51.512159   35097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:43:51.512171   35097 out.go:309] Setting ErrFile to fd 2...
	I0108 20:43:51.512179   35097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:43:51.512394   35097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 20:43:51.512945   35097 out.go:303] Setting JSON to false
	I0108 20:43:51.513814   35097 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5156,"bootTime":1704741476,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:43:51.513871   35097 start.go:138] virtualization: kvm guest
	I0108 20:43:51.516659   35097 out.go:177] * [multinode-340815] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:43:51.518442   35097 notify.go:220] Checking for updates...
	I0108 20:43:51.520354   35097 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:43:51.521874   35097 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:43:51.523555   35097 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:43:51.525083   35097 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:43:51.526821   35097 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:43:51.528783   35097 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:43:51.530962   35097 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:43:51.531073   35097 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:43:51.531509   35097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:43:51.531558   35097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:43:51.546740   35097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
	I0108 20:43:51.547181   35097 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:43:51.547718   35097 main.go:141] libmachine: Using API Version  1
	I0108 20:43:51.547734   35097 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:43:51.548047   35097 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:43:51.548249   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:43:51.584961   35097 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 20:43:51.586680   35097 start.go:298] selected driver: kvm2
	I0108 20:43:51.586702   35097 start.go:902] validating driver "kvm2" against &{Name:multinode-340815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:fals
e ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:43:51.586963   35097 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:43:51.587415   35097 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:43:51.587486   35097 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 20:43:51.602078   35097 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 20:43:51.602733   35097 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 20:43:51.602802   35097 cni.go:84] Creating CNI manager for ""
	I0108 20:43:51.602816   35097 cni.go:136] 3 nodes found, recommending kindnet
	I0108 20:43:51.602827   35097 start_flags.go:323] config:
	{Name:multinode-340815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-340815 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pro
visioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:43:51.603025   35097 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:43:51.606578   35097 out.go:177] * Starting control plane node multinode-340815 in cluster multinode-340815
	I0108 20:43:51.608399   35097 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:43:51.608465   35097 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 20:43:51.608476   35097 cache.go:56] Caching tarball of preloaded images
	I0108 20:43:51.608560   35097 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 20:43:51.608570   35097 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 20:43:51.608686   35097 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/config.json ...
	I0108 20:43:51.608895   35097 start.go:365] acquiring machines lock for multinode-340815: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 20:43:51.608935   35097 start.go:369] acquired machines lock for "multinode-340815" in 21.466µs
	I0108 20:43:51.608949   35097 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:43:51.608954   35097 fix.go:54] fixHost starting: 
	I0108 20:43:51.609217   35097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:43:51.609248   35097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:43:51.622855   35097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34397
	I0108 20:43:51.623238   35097 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:43:51.623735   35097 main.go:141] libmachine: Using API Version  1
	I0108 20:43:51.623759   35097 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:43:51.624055   35097 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:43:51.624292   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:43:51.624468   35097 main.go:141] libmachine: (multinode-340815) Calling .GetState
	I0108 20:43:51.626054   35097 fix.go:102] recreateIfNeeded on multinode-340815: state=Running err=<nil>
	W0108 20:43:51.626086   35097 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 20:43:51.628562   35097 out.go:177] * Updating the running kvm2 "multinode-340815" VM ...
	I0108 20:43:51.630033   35097 machine.go:88] provisioning docker machine ...
	I0108 20:43:51.630056   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:43:51.630271   35097 main.go:141] libmachine: (multinode-340815) Calling .GetMachineName
	I0108 20:43:51.630434   35097 buildroot.go:166] provisioning hostname "multinode-340815"
	I0108 20:43:51.630455   35097 main.go:141] libmachine: (multinode-340815) Calling .GetMachineName
	I0108 20:43:51.630657   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:43:51.633364   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:43:51.633811   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:43:51.633845   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:43:51.634029   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:43:51.634206   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:43:51.634399   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:43:51.634545   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:43:51.634698   35097 main.go:141] libmachine: Using SSH client type: native
	I0108 20:43:51.635020   35097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0108 20:43:51.635033   35097 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-340815 && echo "multinode-340815" | sudo tee /etc/hostname
	I0108 20:44:10.172488   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:44:16.252503   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:44:19.324397   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:44:25.404420   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:44:28.476496   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:44:34.556414   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:44:37.628389   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:44:43.708386   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:44:46.780408   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:44:52.860349   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:44:55.932387   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:02.012394   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:05.084429   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:11.164402   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:14.240387   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:20.316388   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:23.388459   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:29.468418   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:32.540469   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:38.620428   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:41.692364   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:47.772463   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:50.844458   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:56.924434   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:45:59.996537   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:46:06.076417   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:46:09.148471   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:46:15.228422   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:46:18.300371   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:46:24.380435   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:46:27.452458   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:46:33.532463   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:46:36.604463   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:46:42.684401   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:46:45.756540   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:46:51.836400   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:46:54.908392   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:00.988398   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:04.060433   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:10.140426   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:13.212479   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:19.292391   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:22.364331   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:28.444395   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:31.516583   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:37.596344   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:40.668375   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:46.748429   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:49.820426   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:55.900367   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:47:58.972502   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:48:05.052441   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:48:08.124368   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:48:14.204361   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:48:17.276469   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:48:23.356393   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:48:26.428379   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:48:32.508356   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:48:35.580396   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:48:41.660385   35097 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.196:22: connect: no route to host
	I0108 20:48:44.663151   35097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:48:44.663238   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:48:44.665273   35097 machine.go:91] provisioned docker machine in 4m53.035217781s
	I0108 20:48:44.665316   35097 fix.go:56] fixHost completed within 4m53.056361573s
	I0108 20:48:44.665323   35097 start.go:83] releasing machines lock for "multinode-340815", held for 4m53.056378189s
	W0108 20:48:44.665354   35097 start.go:694] error starting host: provision: host is not running
	W0108 20:48:44.665436   35097 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0108 20:48:44.665449   35097 start.go:709] Will try again in 5 seconds ...
	I0108 20:48:49.666336   35097 start.go:365] acquiring machines lock for multinode-340815: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 20:48:49.666445   35097 start.go:369] acquired machines lock for "multinode-340815" in 69.392µs
	I0108 20:48:49.666465   35097 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:48:49.666472   35097 fix.go:54] fixHost starting: 
	I0108 20:48:49.666772   35097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:48:49.666794   35097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:48:49.681565   35097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40191
	I0108 20:48:49.682030   35097 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:48:49.682479   35097 main.go:141] libmachine: Using API Version  1
	I0108 20:48:49.682500   35097 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:48:49.682829   35097 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:48:49.683015   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:48:49.683178   35097 main.go:141] libmachine: (multinode-340815) Calling .GetState
	I0108 20:48:49.684911   35097 fix.go:102] recreateIfNeeded on multinode-340815: state=Stopped err=<nil>
	I0108 20:48:49.684934   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	W0108 20:48:49.685091   35097 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 20:48:49.687380   35097 out.go:177] * Restarting existing kvm2 VM for "multinode-340815" ...
	I0108 20:48:49.689010   35097 main.go:141] libmachine: (multinode-340815) Calling .Start
	I0108 20:48:49.689197   35097 main.go:141] libmachine: (multinode-340815) Ensuring networks are active...
	I0108 20:48:49.690015   35097 main.go:141] libmachine: (multinode-340815) Ensuring network default is active
	I0108 20:48:49.690362   35097 main.go:141] libmachine: (multinode-340815) Ensuring network mk-multinode-340815 is active
	I0108 20:48:49.690684   35097 main.go:141] libmachine: (multinode-340815) Getting domain xml...
	I0108 20:48:49.691500   35097 main.go:141] libmachine: (multinode-340815) Creating domain...
	I0108 20:48:50.949982   35097 main.go:141] libmachine: (multinode-340815) Waiting to get IP...
	I0108 20:48:50.951159   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:48:50.951691   35097 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:48:50.951757   35097 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:48:50.951666   35858 retry.go:31] will retry after 300.073592ms: waiting for machine to come up
	I0108 20:48:51.253266   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:48:51.253725   35097 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:48:51.253766   35097 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:48:51.253671   35858 retry.go:31] will retry after 320.625434ms: waiting for machine to come up
	I0108 20:48:51.576409   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:48:51.577003   35097 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:48:51.577027   35097 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:48:51.576952   35858 retry.go:31] will retry after 424.923247ms: waiting for machine to come up
	I0108 20:48:52.003544   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:48:52.004078   35097 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:48:52.004118   35097 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:48:52.004035   35858 retry.go:31] will retry after 506.192196ms: waiting for machine to come up
	I0108 20:48:52.511899   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:48:52.512383   35097 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:48:52.512408   35097 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:48:52.512338   35858 retry.go:31] will retry after 529.403277ms: waiting for machine to come up
	I0108 20:48:53.043341   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:48:53.043854   35097 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:48:53.043890   35097 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:48:53.043806   35858 retry.go:31] will retry after 812.981845ms: waiting for machine to come up
	I0108 20:48:53.858941   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:48:53.859454   35097 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:48:53.859488   35097 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:48:53.859405   35858 retry.go:31] will retry after 782.644671ms: waiting for machine to come up
	I0108 20:48:54.643689   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:48:54.644278   35097 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:48:54.644308   35097 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:48:54.644220   35858 retry.go:31] will retry after 1.358142202s: waiting for machine to come up
	I0108 20:48:56.004241   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:48:56.004708   35097 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:48:56.004736   35097 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:48:56.004677   35858 retry.go:31] will retry after 1.527066865s: waiting for machine to come up
	I0108 20:48:57.532998   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:48:57.533430   35097 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:48:57.533460   35097 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:48:57.533373   35858 retry.go:31] will retry after 2.23532759s: waiting for machine to come up
	I0108 20:48:59.770395   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:48:59.770850   35097 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:48:59.770884   35097 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:48:59.770829   35858 retry.go:31] will retry after 2.275311759s: waiting for machine to come up
	I0108 20:49:02.049484   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:02.049975   35097 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:49:02.050004   35097 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:49:02.049922   35858 retry.go:31] will retry after 2.9407376s: waiting for machine to come up
	I0108 20:49:04.992719   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:04.993130   35097 main.go:141] libmachine: (multinode-340815) DBG | unable to find current IP address of domain multinode-340815 in network mk-multinode-340815
	I0108 20:49:04.993165   35097 main.go:141] libmachine: (multinode-340815) DBG | I0108 20:49:04.993077   35858 retry.go:31] will retry after 4.060782482s: waiting for machine to come up
	I0108 20:49:09.058382   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.058866   35097 main.go:141] libmachine: (multinode-340815) Found IP for machine: 192.168.39.196
	I0108 20:49:09.058887   35097 main.go:141] libmachine: (multinode-340815) Reserving static IP address...
	I0108 20:49:09.058905   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has current primary IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.059423   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "multinode-340815", mac: "52:54:00:06:a0:1e", ip: "192.168.39.196"} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:09.059462   35097 main.go:141] libmachine: (multinode-340815) DBG | skip adding static IP to network mk-multinode-340815 - found existing host DHCP lease matching {name: "multinode-340815", mac: "52:54:00:06:a0:1e", ip: "192.168.39.196"}
	I0108 20:49:09.059472   35097 main.go:141] libmachine: (multinode-340815) Reserved static IP address: 192.168.39.196
	I0108 20:49:09.059482   35097 main.go:141] libmachine: (multinode-340815) Waiting for SSH to be available...
	I0108 20:49:09.059493   35097 main.go:141] libmachine: (multinode-340815) DBG | Getting to WaitForSSH function...
	I0108 20:49:09.062036   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.062513   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:09.062539   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.062727   35097 main.go:141] libmachine: (multinode-340815) DBG | Using SSH client type: external
	I0108 20:49:09.062750   35097 main.go:141] libmachine: (multinode-340815) DBG | Using SSH private key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa (-rw-------)
	I0108 20:49:09.062804   35097 main.go:141] libmachine: (multinode-340815) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 20:49:09.062824   35097 main.go:141] libmachine: (multinode-340815) DBG | About to run SSH command:
	I0108 20:49:09.062843   35097 main.go:141] libmachine: (multinode-340815) DBG | exit 0
	I0108 20:49:09.161190   35097 main.go:141] libmachine: (multinode-340815) DBG | SSH cmd err, output: <nil>: 
	I0108 20:49:09.161628   35097 main.go:141] libmachine: (multinode-340815) Calling .GetConfigRaw
	I0108 20:49:09.162369   35097 main.go:141] libmachine: (multinode-340815) Calling .GetIP
	I0108 20:49:09.165313   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.165832   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:09.165859   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.166171   35097 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/config.json ...
	I0108 20:49:09.166407   35097 machine.go:88] provisioning docker machine ...
	I0108 20:49:09.166426   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:49:09.166705   35097 main.go:141] libmachine: (multinode-340815) Calling .GetMachineName
	I0108 20:49:09.166880   35097 buildroot.go:166] provisioning hostname "multinode-340815"
	I0108 20:49:09.166895   35097 main.go:141] libmachine: (multinode-340815) Calling .GetMachineName
	I0108 20:49:09.167075   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:49:09.169509   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.169887   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:09.169913   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.170167   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:49:09.170358   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:49:09.170560   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:49:09.170761   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:49:09.171011   35097 main.go:141] libmachine: Using SSH client type: native
	I0108 20:49:09.171361   35097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0108 20:49:09.171376   35097 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-340815 && echo "multinode-340815" | sudo tee /etc/hostname
	I0108 20:49:09.317705   35097 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-340815
	
	I0108 20:49:09.317732   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:49:09.320954   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.321304   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:09.321331   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.321524   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:49:09.321750   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:49:09.321901   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:49:09.322048   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:49:09.322222   35097 main.go:141] libmachine: Using SSH client type: native
	I0108 20:49:09.322548   35097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0108 20:49:09.322578   35097 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-340815' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-340815/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-340815' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:49:09.465526   35097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:49:09.465560   35097 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17907-10702/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-10702/.minikube}
	I0108 20:49:09.465598   35097 buildroot.go:174] setting up certificates
	I0108 20:49:09.465610   35097 provision.go:83] configureAuth start
	I0108 20:49:09.465623   35097 main.go:141] libmachine: (multinode-340815) Calling .GetMachineName
	I0108 20:49:09.465932   35097 main.go:141] libmachine: (multinode-340815) Calling .GetIP
	I0108 20:49:09.468449   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.468816   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:09.468847   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.469003   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:49:09.471265   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.471723   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:09.471753   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.471905   35097 provision.go:138] copyHostCerts
	I0108 20:49:09.471940   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 20:49:09.471988   35097 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem, removing ...
	I0108 20:49:09.472000   35097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 20:49:09.472108   35097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem (1082 bytes)
	I0108 20:49:09.472269   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 20:49:09.472302   35097 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem, removing ...
	I0108 20:49:09.472321   35097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 20:49:09.472370   35097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem (1123 bytes)
	I0108 20:49:09.472443   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 20:49:09.472468   35097 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem, removing ...
	I0108 20:49:09.472477   35097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 20:49:09.472511   35097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem (1675 bytes)
	I0108 20:49:09.472571   35097 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem org=jenkins.multinode-340815 san=[192.168.39.196 192.168.39.196 localhost 127.0.0.1 minikube multinode-340815]
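For illustration only, a minimal Go sketch of the server-certificate step logged above: issue a certificate whose SANs cover the node IP, localhost and the machine names, signed by a CA key pair. This is not minikube's provision.go code; it generates a throwaway CA in memory instead of loading ca.pem/ca-key.pem from the .minikube directory, and it skips error handling.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real provisioner loads an existing CA from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.multinode-340815"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "multinode-340815"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.196"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-340815"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}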
	I0108 20:49:09.604592   35097 provision.go:172] copyRemoteCerts
	I0108 20:49:09.604662   35097 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:49:09.604692   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:49:09.607550   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.607914   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:09.607942   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.608162   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:49:09.608346   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:49:09.608492   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:49:09.608717   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:49:09.706277   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 20:49:09.706347   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:49:09.734236   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 20:49:09.734314   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 20:49:09.759965   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 20:49:09.760042   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 20:49:09.784876   35097 provision.go:86] duration metric: configureAuth took 319.251842ms
	I0108 20:49:09.784902   35097 buildroot.go:189] setting minikube options for container-runtime
	I0108 20:49:09.785142   35097 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:49:09.785224   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:49:09.788275   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.788752   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:09.788787   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:09.788938   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:49:09.789161   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:49:09.789320   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:49:09.789422   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:49:09.789563   35097 main.go:141] libmachine: Using SSH client type: native
	I0108 20:49:09.789933   35097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0108 20:49:09.789951   35097 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:49:10.133571   35097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:49:10.133605   35097 machine.go:91] provisioned docker machine in 967.183193ms
	I0108 20:49:10.133622   35097 start.go:300] post-start starting for "multinode-340815" (driver="kvm2")
	I0108 20:49:10.133636   35097 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:49:10.133656   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:49:10.133947   35097 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:49:10.133981   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:49:10.137295   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:10.137772   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:10.137813   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:10.138063   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:49:10.138290   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:49:10.138532   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:49:10.138731   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:49:10.239790   35097 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:49:10.245023   35097 command_runner.go:130] > NAME=Buildroot
	I0108 20:49:10.245049   35097 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 20:49:10.245054   35097 command_runner.go:130] > ID=buildroot
	I0108 20:49:10.245060   35097 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 20:49:10.245070   35097 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 20:49:10.245121   35097 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 20:49:10.245148   35097 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/addons for local assets ...
	I0108 20:49:10.245233   35097 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/files for local assets ...
	I0108 20:49:10.245321   35097 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> 178962.pem in /etc/ssl/certs
	I0108 20:49:10.245331   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> /etc/ssl/certs/178962.pem
	I0108 20:49:10.245410   35097 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:49:10.257632   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /etc/ssl/certs/178962.pem (1708 bytes)
	I0108 20:49:10.283597   35097 start.go:303] post-start completed in 149.961291ms
	I0108 20:49:10.283618   35097 fix.go:56] fixHost completed within 20.617145287s
	I0108 20:49:10.283639   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:49:10.286477   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:10.287007   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:10.287047   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:10.287210   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:49:10.287470   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:49:10.287690   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:49:10.287914   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:49:10.288139   35097 main.go:141] libmachine: Using SSH client type: native
	I0108 20:49:10.288578   35097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0108 20:49:10.288600   35097 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 20:49:10.421107   35097 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704746950.369312368
	
	I0108 20:49:10.421140   35097 fix.go:206] guest clock: 1704746950.369312368
	I0108 20:49:10.421147   35097 fix.go:219] Guest: 2024-01-08 20:49:10.369312368 +0000 UTC Remote: 2024-01-08 20:49:10.283622467 +0000 UTC m=+318.821100001 (delta=85.689901ms)
	I0108 20:49:10.421165   35097 fix.go:190] guest clock delta is within tolerance: 85.689901ms
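As a rough sketch of the clock check logged above (not the actual fix.go logic): parse the guest's date +%s.%N output, compare it with the host-side timestamp, and accept the skew if the absolute delta stays under a tolerance. The one-second threshold below is an assumption for illustration; the timestamps are the ones recorded in the log.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns output like "1704746950.369312368" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1704746950.369312368") // guest clock from the log
	if err != nil {
		panic(err)
	}
	remote := time.Unix(1704746950, 283622467) // host-side timestamp from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold, for illustration only
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}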
	I0108 20:49:10.421169   35097 start.go:83] releasing machines lock for "multinode-340815", held for 20.754716672s
	I0108 20:49:10.421187   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:49:10.421451   35097 main.go:141] libmachine: (multinode-340815) Calling .GetIP
	I0108 20:49:10.424055   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:10.424470   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:10.424501   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:10.424657   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:49:10.425210   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:49:10.425389   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:49:10.425470   35097 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:49:10.425525   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:49:10.425591   35097 ssh_runner.go:195] Run: cat /version.json
	I0108 20:49:10.425620   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:49:10.428665   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:10.428708   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:10.429228   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:10.429263   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:10.429289   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:10.429308   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:10.429466   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:49:10.429595   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:49:10.429688   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:49:10.429770   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:49:10.429871   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:49:10.429946   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:49:10.430017   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:49:10.430225   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:49:10.543846   35097 command_runner.go:130] > {"iso_version": "v1.32.1-1702708929-17806", "kicbase_version": "v0.0.42-1702660877-17806", "minikube_version": "v1.32.0", "commit": "957da21b08687cca2533dd65b67e68ead277b79e"}
	I0108 20:49:10.543937   35097 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 20:49:10.543981   35097 ssh_runner.go:195] Run: systemctl --version
	I0108 20:49:10.550362   35097 command_runner.go:130] > systemd 247 (247)
	I0108 20:49:10.550396   35097 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0108 20:49:10.550442   35097 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:49:10.693335   35097 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:49:10.699414   35097 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 20:49:10.699628   35097 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 20:49:10.699694   35097 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:49:10.717239   35097 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 20:49:10.717294   35097 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 20:49:10.717302   35097 start.go:475] detecting cgroup driver to use...
	I0108 20:49:10.717377   35097 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:49:10.731465   35097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:49:10.744285   35097 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:49:10.744348   35097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:49:10.757308   35097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:49:10.770321   35097 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:49:10.878873   35097 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0108 20:49:10.878956   35097 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:49:10.896199   35097 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0108 20:49:11.003600   35097 docker.go:233] disabling docker service ...
	I0108 20:49:11.003677   35097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:49:11.018149   35097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:49:11.030206   35097 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0108 20:49:11.030327   35097 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:49:11.131092   35097 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0108 20:49:11.131186   35097 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:49:11.145022   35097 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0108 20:49:11.145374   35097 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0108 20:49:11.232825   35097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:49:11.245323   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:49:11.265468   35097 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 20:49:11.265509   35097 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 20:49:11.265570   35097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:49:11.275776   35097 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:49:11.275855   35097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:49:11.285676   35097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:49:11.295719   35097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:49:11.305415   35097 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:49:11.315463   35097 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:49:11.323674   35097 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 20:49:11.323872   35097 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 20:49:11.323929   35097 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 20:49:11.335914   35097 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:49:11.344602   35097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:49:11.457959   35097 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 20:49:11.647171   35097 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:49:11.647273   35097 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:49:11.653630   35097 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 20:49:11.653657   35097 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 20:49:11.653668   35097 command_runner.go:130] > Device: 16h/22d	Inode: 732         Links: 1
	I0108 20:49:11.653679   35097 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:49:11.653687   35097 command_runner.go:130] > Access: 2024-01-08 20:49:11.581432026 +0000
	I0108 20:49:11.653693   35097 command_runner.go:130] > Modify: 2024-01-08 20:49:11.581432026 +0000
	I0108 20:49:11.653698   35097 command_runner.go:130] > Change: 2024-01-08 20:49:11.581432026 +0000
	I0108 20:49:11.653702   35097 command_runner.go:130] >  Birth: -
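The "Will wait 60s for socket path" step above boils down to polling until the CRI-O socket exists. A minimal sketch of that pattern follows; the real code runs stat over SSH, while here os.Stat is used locally for illustration.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout expires.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}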
	I0108 20:49:11.653721   35097 start.go:543] Will wait 60s for crictl version
	I0108 20:49:11.653770   35097 ssh_runner.go:195] Run: which crictl
	I0108 20:49:11.657689   35097 command_runner.go:130] > /usr/bin/crictl
	I0108 20:49:11.657744   35097 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:49:11.699613   35097 command_runner.go:130] > Version:  0.1.0
	I0108 20:49:11.699637   35097 command_runner.go:130] > RuntimeName:  cri-o
	I0108 20:49:11.699644   35097 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 20:49:11.699654   35097 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 20:49:11.699677   35097 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 20:49:11.699753   35097 ssh_runner.go:195] Run: crio --version
	I0108 20:49:11.754683   35097 command_runner.go:130] > crio version 1.24.1
	I0108 20:49:11.754704   35097 command_runner.go:130] > Version:          1.24.1
	I0108 20:49:11.754711   35097 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 20:49:11.754715   35097 command_runner.go:130] > GitTreeState:     dirty
	I0108 20:49:11.754721   35097 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 20:49:11.754725   35097 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 20:49:11.754729   35097 command_runner.go:130] > Compiler:         gc
	I0108 20:49:11.754734   35097 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:49:11.754749   35097 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:49:11.754767   35097 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:49:11.754774   35097 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:49:11.754780   35097 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:49:11.756043   35097 ssh_runner.go:195] Run: crio --version
	I0108 20:49:11.808133   35097 command_runner.go:130] > crio version 1.24.1
	I0108 20:49:11.808159   35097 command_runner.go:130] > Version:          1.24.1
	I0108 20:49:11.808191   35097 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 20:49:11.808198   35097 command_runner.go:130] > GitTreeState:     dirty
	I0108 20:49:11.808210   35097 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 20:49:11.808216   35097 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 20:49:11.808222   35097 command_runner.go:130] > Compiler:         gc
	I0108 20:49:11.808229   35097 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:49:11.808237   35097 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:49:11.808249   35097 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:49:11.808267   35097 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:49:11.808274   35097 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:49:11.811809   35097 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 20:49:11.813538   35097 main.go:141] libmachine: (multinode-340815) Calling .GetIP
	I0108 20:49:11.816516   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:11.816860   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:49:11.816890   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:49:11.817095   35097 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 20:49:11.821621   35097 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
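The bash one-liner above filters any existing host.minikube.internal line out of /etc/hosts and appends a fresh mapping via a temp file (sudo is needed only for the final copy). A hypothetical Go equivalent of the same idea, for readability:

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry removes stale lines for host and appends "ip<TAB>host".
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values taken from the log; writing /etc/hosts requires root.
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}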
	I0108 20:49:11.835392   35097 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:49:11.835441   35097 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:49:11.876763   35097 command_runner.go:130] > {
	I0108 20:49:11.876784   35097 command_runner.go:130] >   "images": [
	I0108 20:49:11.876792   35097 command_runner.go:130] >     {
	I0108 20:49:11.876800   35097 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 20:49:11.876805   35097 command_runner.go:130] >       "repoTags": [
	I0108 20:49:11.876813   35097 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 20:49:11.876820   35097 command_runner.go:130] >       ],
	I0108 20:49:11.876826   35097 command_runner.go:130] >       "repoDigests": [
	I0108 20:49:11.876839   35097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 20:49:11.876849   35097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 20:49:11.876857   35097 command_runner.go:130] >       ],
	I0108 20:49:11.876868   35097 command_runner.go:130] >       "size": "750414",
	I0108 20:49:11.876874   35097 command_runner.go:130] >       "uid": {
	I0108 20:49:11.876881   35097 command_runner.go:130] >         "value": "65535"
	I0108 20:49:11.876887   35097 command_runner.go:130] >       },
	I0108 20:49:11.876892   35097 command_runner.go:130] >       "username": "",
	I0108 20:49:11.876932   35097 command_runner.go:130] >       "spec": null,
	I0108 20:49:11.876946   35097 command_runner.go:130] >       "pinned": false
	I0108 20:49:11.876952   35097 command_runner.go:130] >     }
	I0108 20:49:11.876959   35097 command_runner.go:130] >   ]
	I0108 20:49:11.876971   35097 command_runner.go:130] > }
	I0108 20:49:11.877092   35097 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 20:49:11.877147   35097 ssh_runner.go:195] Run: which lz4
	I0108 20:49:11.881325   35097 command_runner.go:130] > /usr/bin/lz4
	I0108 20:49:11.881356   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 20:49:11.881463   35097 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 20:49:11.885445   35097 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 20:49:11.885644   35097 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 20:49:11.885670   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 20:49:13.660672   35097 crio.go:444] Took 1.779257 seconds to copy over tarball
	I0108 20:49:13.660745   35097 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 20:49:16.731834   35097 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.071064796s)
	I0108 20:49:16.731857   35097 crio.go:451] Took 3.071159 seconds to extract the tarball
	I0108 20:49:16.731871   35097 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 20:49:16.773008   35097 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 20:49:16.821306   35097 command_runner.go:130] > {
	I0108 20:49:16.821327   35097 command_runner.go:130] >   "images": [
	I0108 20:49:16.821333   35097 command_runner.go:130] >     {
	I0108 20:49:16.821344   35097 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0108 20:49:16.821350   35097 command_runner.go:130] >       "repoTags": [
	I0108 20:49:16.821374   35097 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0108 20:49:16.821382   35097 command_runner.go:130] >       ],
	I0108 20:49:16.821390   35097 command_runner.go:130] >       "repoDigests": [
	I0108 20:49:16.821405   35097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0108 20:49:16.821421   35097 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0108 20:49:16.821428   35097 command_runner.go:130] >       ],
	I0108 20:49:16.821437   35097 command_runner.go:130] >       "size": "65258016",
	I0108 20:49:16.821447   35097 command_runner.go:130] >       "uid": null,
	I0108 20:49:16.821455   35097 command_runner.go:130] >       "username": "",
	I0108 20:49:16.821472   35097 command_runner.go:130] >       "spec": null,
	I0108 20:49:16.821483   35097 command_runner.go:130] >       "pinned": false
	I0108 20:49:16.821499   35097 command_runner.go:130] >     },
	I0108 20:49:16.821509   35097 command_runner.go:130] >     {
	I0108 20:49:16.821522   35097 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0108 20:49:16.821533   35097 command_runner.go:130] >       "repoTags": [
	I0108 20:49:16.821545   35097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0108 20:49:16.821552   35097 command_runner.go:130] >       ],
	I0108 20:49:16.821562   35097 command_runner.go:130] >       "repoDigests": [
	I0108 20:49:16.821579   35097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0108 20:49:16.821596   35097 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0108 20:49:16.821606   35097 command_runner.go:130] >       ],
	I0108 20:49:16.821620   35097 command_runner.go:130] >       "size": "31470524",
	I0108 20:49:16.821630   35097 command_runner.go:130] >       "uid": null,
	I0108 20:49:16.821640   35097 command_runner.go:130] >       "username": "",
	I0108 20:49:16.821650   35097 command_runner.go:130] >       "spec": null,
	I0108 20:49:16.821661   35097 command_runner.go:130] >       "pinned": false
	I0108 20:49:16.821670   35097 command_runner.go:130] >     },
	I0108 20:49:16.821677   35097 command_runner.go:130] >     {
	I0108 20:49:16.821692   35097 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0108 20:49:16.821705   35097 command_runner.go:130] >       "repoTags": [
	I0108 20:49:16.821718   35097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0108 20:49:16.821725   35097 command_runner.go:130] >       ],
	I0108 20:49:16.821734   35097 command_runner.go:130] >       "repoDigests": [
	I0108 20:49:16.821750   35097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0108 20:49:16.821766   35097 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0108 20:49:16.821776   35097 command_runner.go:130] >       ],
	I0108 20:49:16.821785   35097 command_runner.go:130] >       "size": "53621675",
	I0108 20:49:16.821796   35097 command_runner.go:130] >       "uid": null,
	I0108 20:49:16.821806   35097 command_runner.go:130] >       "username": "",
	I0108 20:49:16.821815   35097 command_runner.go:130] >       "spec": null,
	I0108 20:49:16.821826   35097 command_runner.go:130] >       "pinned": false
	I0108 20:49:16.821833   35097 command_runner.go:130] >     },
	I0108 20:49:16.821843   35097 command_runner.go:130] >     {
	I0108 20:49:16.821856   35097 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0108 20:49:16.821864   35097 command_runner.go:130] >       "repoTags": [
	I0108 20:49:16.821876   35097 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0108 20:49:16.821884   35097 command_runner.go:130] >       ],
	I0108 20:49:16.821897   35097 command_runner.go:130] >       "repoDigests": [
	I0108 20:49:16.821913   35097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0108 20:49:16.821928   35097 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0108 20:49:16.821945   35097 command_runner.go:130] >       ],
	I0108 20:49:16.821956   35097 command_runner.go:130] >       "size": "295456551",
	I0108 20:49:16.821964   35097 command_runner.go:130] >       "uid": {
	I0108 20:49:16.821975   35097 command_runner.go:130] >         "value": "0"
	I0108 20:49:16.821984   35097 command_runner.go:130] >       },
	I0108 20:49:16.821993   35097 command_runner.go:130] >       "username": "",
	I0108 20:49:16.822004   35097 command_runner.go:130] >       "spec": null,
	I0108 20:49:16.822014   35097 command_runner.go:130] >       "pinned": false
	I0108 20:49:16.822021   35097 command_runner.go:130] >     },
	I0108 20:49:16.822030   35097 command_runner.go:130] >     {
	I0108 20:49:16.822041   35097 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0108 20:49:16.822052   35097 command_runner.go:130] >       "repoTags": [
	I0108 20:49:16.822062   35097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0108 20:49:16.822072   35097 command_runner.go:130] >       ],
	I0108 20:49:16.822083   35097 command_runner.go:130] >       "repoDigests": [
	I0108 20:49:16.822101   35097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0108 20:49:16.822117   35097 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0108 20:49:16.822127   35097 command_runner.go:130] >       ],
	I0108 20:49:16.822136   35097 command_runner.go:130] >       "size": "127226832",
	I0108 20:49:16.822146   35097 command_runner.go:130] >       "uid": {
	I0108 20:49:16.822157   35097 command_runner.go:130] >         "value": "0"
	I0108 20:49:16.822164   35097 command_runner.go:130] >       },
	I0108 20:49:16.822174   35097 command_runner.go:130] >       "username": "",
	I0108 20:49:16.822184   35097 command_runner.go:130] >       "spec": null,
	I0108 20:49:16.822192   35097 command_runner.go:130] >       "pinned": false
	I0108 20:49:16.822201   35097 command_runner.go:130] >     },
	I0108 20:49:16.822208   35097 command_runner.go:130] >     {
	I0108 20:49:16.822223   35097 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0108 20:49:16.822234   35097 command_runner.go:130] >       "repoTags": [
	I0108 20:49:16.822246   35097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0108 20:49:16.822256   35097 command_runner.go:130] >       ],
	I0108 20:49:16.822265   35097 command_runner.go:130] >       "repoDigests": [
	I0108 20:49:16.822281   35097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0108 20:49:16.822301   35097 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0108 20:49:16.822311   35097 command_runner.go:130] >       ],
	I0108 20:49:16.822322   35097 command_runner.go:130] >       "size": "123261750",
	I0108 20:49:16.822331   35097 command_runner.go:130] >       "uid": {
	I0108 20:49:16.822340   35097 command_runner.go:130] >         "value": "0"
	I0108 20:49:16.822349   35097 command_runner.go:130] >       },
	I0108 20:49:16.822365   35097 command_runner.go:130] >       "username": "",
	I0108 20:49:16.822373   35097 command_runner.go:130] >       "spec": null,
	I0108 20:49:16.822383   35097 command_runner.go:130] >       "pinned": false
	I0108 20:49:16.822397   35097 command_runner.go:130] >     },
	I0108 20:49:16.822406   35097 command_runner.go:130] >     {
	I0108 20:49:16.822418   35097 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0108 20:49:16.822428   35097 command_runner.go:130] >       "repoTags": [
	I0108 20:49:16.822438   35097 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0108 20:49:16.822447   35097 command_runner.go:130] >       ],
	I0108 20:49:16.822456   35097 command_runner.go:130] >       "repoDigests": [
	I0108 20:49:16.822472   35097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0108 20:49:16.822487   35097 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0108 20:49:16.822500   35097 command_runner.go:130] >       ],
	I0108 20:49:16.822511   35097 command_runner.go:130] >       "size": "74749335",
	I0108 20:49:16.822521   35097 command_runner.go:130] >       "uid": null,
	I0108 20:49:16.822531   35097 command_runner.go:130] >       "username": "",
	I0108 20:49:16.822539   35097 command_runner.go:130] >       "spec": null,
	I0108 20:49:16.822550   35097 command_runner.go:130] >       "pinned": false
	I0108 20:49:16.822557   35097 command_runner.go:130] >     },
	I0108 20:49:16.822566   35097 command_runner.go:130] >     {
	I0108 20:49:16.822578   35097 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0108 20:49:16.822588   35097 command_runner.go:130] >       "repoTags": [
	I0108 20:49:16.822600   35097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0108 20:49:16.822610   35097 command_runner.go:130] >       ],
	I0108 20:49:16.822619   35097 command_runner.go:130] >       "repoDigests": [
	I0108 20:49:16.822649   35097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0108 20:49:16.822665   35097 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0108 20:49:16.822672   35097 command_runner.go:130] >       ],
	I0108 20:49:16.822683   35097 command_runner.go:130] >       "size": "61551410",
	I0108 20:49:16.822692   35097 command_runner.go:130] >       "uid": {
	I0108 20:49:16.822711   35097 command_runner.go:130] >         "value": "0"
	I0108 20:49:16.822721   35097 command_runner.go:130] >       },
	I0108 20:49:16.822729   35097 command_runner.go:130] >       "username": "",
	I0108 20:49:16.822739   35097 command_runner.go:130] >       "spec": null,
	I0108 20:49:16.822750   35097 command_runner.go:130] >       "pinned": false
	I0108 20:49:16.822757   35097 command_runner.go:130] >     },
	I0108 20:49:16.822765   35097 command_runner.go:130] >     {
	I0108 20:49:16.822779   35097 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0108 20:49:16.822789   35097 command_runner.go:130] >       "repoTags": [
	I0108 20:49:16.822800   35097 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0108 20:49:16.822808   35097 command_runner.go:130] >       ],
	I0108 20:49:16.822819   35097 command_runner.go:130] >       "repoDigests": [
	I0108 20:49:16.822834   35097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0108 20:49:16.822850   35097 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0108 20:49:16.822859   35097 command_runner.go:130] >       ],
	I0108 20:49:16.822866   35097 command_runner.go:130] >       "size": "750414",
	I0108 20:49:16.822873   35097 command_runner.go:130] >       "uid": {
	I0108 20:49:16.822884   35097 command_runner.go:130] >         "value": "65535"
	I0108 20:49:16.822895   35097 command_runner.go:130] >       },
	I0108 20:49:16.822906   35097 command_runner.go:130] >       "username": "",
	I0108 20:49:16.822916   35097 command_runner.go:130] >       "spec": null,
	I0108 20:49:16.822924   35097 command_runner.go:130] >       "pinned": false
	I0108 20:49:16.822933   35097 command_runner.go:130] >     }
	I0108 20:49:16.822941   35097 command_runner.go:130] >   ]
	I0108 20:49:16.822949   35097 command_runner.go:130] > }
	I0108 20:49:16.823063   35097 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 20:49:16.823075   35097 cache_images.go:84] Images are preloaded, skipping loading
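The preload decision above (first "assuming images are not preloaded", then "all images are preloaded" after the tarball is extracted) comes down to parsing crictl's JSON and looking for the required tags. A small sketch of that check, with field names taken from the JSON shown in the log; the tag and sample payload are illustrative.

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the relevant fields of `crictl images --output json`.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the listing contains the given repo tag.
func hasImage(raw []byte, tag string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"id":"7fe0e6","repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.28.4")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok) // false would trigger copying /preloaded.tar.lz4
}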
	I0108 20:49:16.823146   35097 ssh_runner.go:195] Run: crio config
	I0108 20:49:16.874368   35097 command_runner.go:130] ! time="2024-01-08 20:49:16.822279419Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 20:49:16.874411   35097 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 20:49:16.878937   35097 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 20:49:16.878957   35097 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 20:49:16.878963   35097 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 20:49:16.878967   35097 command_runner.go:130] > #
	I0108 20:49:16.878973   35097 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 20:49:16.878979   35097 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 20:49:16.878985   35097 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 20:49:16.878995   35097 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 20:49:16.879005   35097 command_runner.go:130] > # reload'.
	I0108 20:49:16.879011   35097 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 20:49:16.879020   35097 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 20:49:16.879026   35097 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 20:49:16.879034   35097 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 20:49:16.879040   35097 command_runner.go:130] > [crio]
	I0108 20:49:16.879049   35097 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 20:49:16.879057   35097 command_runner.go:130] > # containers images, in this directory.
	I0108 20:49:16.879067   35097 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 20:49:16.879084   35097 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 20:49:16.879094   35097 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 20:49:16.879102   35097 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 20:49:16.879115   35097 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 20:49:16.879124   35097 command_runner.go:130] > storage_driver = "overlay"
	I0108 20:49:16.879135   35097 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 20:49:16.879146   35097 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 20:49:16.879156   35097 command_runner.go:130] > storage_option = [
	I0108 20:49:16.879167   35097 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 20:49:16.879174   35097 command_runner.go:130] > ]
	I0108 20:49:16.879181   35097 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 20:49:16.879193   35097 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 20:49:16.879203   35097 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 20:49:16.879213   35097 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 20:49:16.879231   35097 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 20:49:16.879238   35097 command_runner.go:130] > # always happen on a node reboot
	I0108 20:49:16.879244   35097 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 20:49:16.879252   35097 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 20:49:16.879260   35097 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 20:49:16.879272   35097 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 20:49:16.879280   35097 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 20:49:16.879287   35097 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 20:49:16.879297   35097 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 20:49:16.879301   35097 command_runner.go:130] > # internal_wipe = true
	I0108 20:49:16.879307   35097 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 20:49:16.879313   35097 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 20:49:16.879320   35097 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 20:49:16.879326   35097 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 20:49:16.879334   35097 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 20:49:16.879338   35097 command_runner.go:130] > [crio.api]
	I0108 20:49:16.879345   35097 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 20:49:16.879350   35097 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 20:49:16.879367   35097 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 20:49:16.879372   35097 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 20:49:16.879378   35097 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 20:49:16.879386   35097 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 20:49:16.879390   35097 command_runner.go:130] > # stream_port = "0"
	I0108 20:49:16.879397   35097 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 20:49:16.879402   35097 command_runner.go:130] > # stream_enable_tls = false
	I0108 20:49:16.879410   35097 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 20:49:16.879414   35097 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 20:49:16.879423   35097 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 20:49:16.879429   35097 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 20:49:16.879436   35097 command_runner.go:130] > # minutes.
	I0108 20:49:16.879439   35097 command_runner.go:130] > # stream_tls_cert = ""
	I0108 20:49:16.879450   35097 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 20:49:16.879456   35097 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 20:49:16.879463   35097 command_runner.go:130] > # stream_tls_key = ""
	I0108 20:49:16.879468   35097 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 20:49:16.879477   35097 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 20:49:16.879485   35097 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 20:49:16.879491   35097 command_runner.go:130] > # stream_tls_ca = ""
	I0108 20:49:16.879498   35097 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:49:16.879505   35097 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 20:49:16.879512   35097 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:49:16.879518   35097 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 20:49:16.879536   35097 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 20:49:16.879544   35097 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 20:49:16.879548   35097 command_runner.go:130] > [crio.runtime]
	I0108 20:49:16.879554   35097 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 20:49:16.879561   35097 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 20:49:16.879566   35097 command_runner.go:130] > # "nofile=1024:2048"
	I0108 20:49:16.879572   35097 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 20:49:16.879576   35097 command_runner.go:130] > # default_ulimits = [
	I0108 20:49:16.879582   35097 command_runner.go:130] > # ]
	I0108 20:49:16.879588   35097 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 20:49:16.879594   35097 command_runner.go:130] > # no_pivot = false
	I0108 20:49:16.879600   35097 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 20:49:16.879610   35097 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 20:49:16.879617   35097 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 20:49:16.879623   35097 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 20:49:16.879630   35097 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 20:49:16.879636   35097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:49:16.879643   35097 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 20:49:16.879647   35097 command_runner.go:130] > # Cgroup setting for conmon
	I0108 20:49:16.879656   35097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 20:49:16.879661   35097 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 20:49:16.879669   35097 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 20:49:16.879674   35097 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 20:49:16.879681   35097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:49:16.879688   35097 command_runner.go:130] > conmon_env = [
	I0108 20:49:16.879694   35097 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 20:49:16.879699   35097 command_runner.go:130] > ]
	I0108 20:49:16.879704   35097 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 20:49:16.879711   35097 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 20:49:16.879717   35097 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 20:49:16.879727   35097 command_runner.go:130] > # default_env = [
	I0108 20:49:16.879731   35097 command_runner.go:130] > # ]
	I0108 20:49:16.879736   35097 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 20:49:16.879743   35097 command_runner.go:130] > # selinux = false
	I0108 20:49:16.879749   35097 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 20:49:16.879757   35097 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 20:49:16.879763   35097 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 20:49:16.879769   35097 command_runner.go:130] > # seccomp_profile = ""
	I0108 20:49:16.879775   35097 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 20:49:16.879782   35097 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 20:49:16.879789   35097 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 20:49:16.879795   35097 command_runner.go:130] > # which might increase security.
	I0108 20:49:16.879803   35097 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 20:49:16.879811   35097 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 20:49:16.879819   35097 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 20:49:16.879827   35097 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 20:49:16.879833   35097 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 20:49:16.879840   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:49:16.879847   35097 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 20:49:16.879852   35097 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 20:49:16.879857   35097 command_runner.go:130] > # the cgroup blockio controller.
	I0108 20:49:16.879862   35097 command_runner.go:130] > # blockio_config_file = ""
	I0108 20:49:16.879868   35097 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 20:49:16.879874   35097 command_runner.go:130] > # irqbalance daemon.
	I0108 20:49:16.879880   35097 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 20:49:16.879888   35097 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 20:49:16.879893   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:49:16.879899   35097 command_runner.go:130] > # rdt_config_file = ""
	I0108 20:49:16.879904   35097 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 20:49:16.879909   35097 command_runner.go:130] > cgroup_manager = "cgroupfs"
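The cgroup_manager chosen here has to agree with the kubelet's cgroupDriver, and the KubeletConfiguration rendered later in this log does use cgroupfs as well. A quick consistency check on the node, assuming the default paths this run writes to:

    sudo grep -Rn '^cgroup_manager' /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
    sudo grep -n '^cgroupDriver' /var/lib/kubelet/config.yaml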
	I0108 20:49:16.879915   35097 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 20:49:16.879922   35097 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 20:49:16.879928   35097 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 20:49:16.879936   35097 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 20:49:16.879942   35097 command_runner.go:130] > # will be added.
	I0108 20:49:16.879947   35097 command_runner.go:130] > # default_capabilities = [
	I0108 20:49:16.879954   35097 command_runner.go:130] > # 	"CHOWN",
	I0108 20:49:16.879958   35097 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 20:49:16.879964   35097 command_runner.go:130] > # 	"FSETID",
	I0108 20:49:16.879968   35097 command_runner.go:130] > # 	"FOWNER",
	I0108 20:49:16.879973   35097 command_runner.go:130] > # 	"SETGID",
	I0108 20:49:16.879977   35097 command_runner.go:130] > # 	"SETUID",
	I0108 20:49:16.879984   35097 command_runner.go:130] > # 	"SETPCAP",
	I0108 20:49:16.879987   35097 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 20:49:16.879991   35097 command_runner.go:130] > # 	"KILL",
	I0108 20:49:16.879995   35097 command_runner.go:130] > # ]
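The commented-out list above is CRI-O's built-in default capability set. Setting the option replaces that list rather than merging with it, so changing it for every container means restating the capabilities you want to keep; a hedged sketch of a drop-in that drops KILL, for example:

    sudo tee /etc/crio/crio.conf.d/30-caps.conf <<'EOF'
    [crio.runtime]
    default_capabilities = [
      "CHOWN", "DAC_OVERRIDE", "FSETID", "FOWNER",
      "SETGID", "SETUID", "SETPCAP", "NET_BIND_SERVICE",
    ]
    EOF
    sudo systemctl restart crio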
	I0108 20:49:16.880001   35097 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 20:49:16.880009   35097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:49:16.880013   35097 command_runner.go:130] > # default_sysctls = [
	I0108 20:49:16.880017   35097 command_runner.go:130] > # ]
	I0108 20:49:16.880021   35097 command_runner.go:130] > # List of devices on the host that a
	I0108 20:49:16.880030   35097 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 20:49:16.880034   35097 command_runner.go:130] > # allowed_devices = [
	I0108 20:49:16.880041   35097 command_runner.go:130] > # 	"/dev/fuse",
	I0108 20:49:16.880046   35097 command_runner.go:130] > # ]
	I0108 20:49:16.880051   35097 command_runner.go:130] > # List of additional devices, specified as
	I0108 20:49:16.880060   35097 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 20:49:16.880066   35097 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 20:49:16.880106   35097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:49:16.880120   35097 command_runner.go:130] > # additional_devices = [
	I0108 20:49:16.880124   35097 command_runner.go:130] > # ]
	I0108 20:49:16.880129   35097 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 20:49:16.880135   35097 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 20:49:16.880139   35097 command_runner.go:130] > # 	"/etc/cdi",
	I0108 20:49:16.880146   35097 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 20:49:16.880149   35097 command_runner.go:130] > # ]
	I0108 20:49:16.880155   35097 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 20:49:16.880163   35097 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 20:49:16.880167   35097 command_runner.go:130] > # Defaults to false.
	I0108 20:49:16.880174   35097 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 20:49:16.880181   35097 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 20:49:16.880188   35097 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 20:49:16.880195   35097 command_runner.go:130] > # hooks_dir = [
	I0108 20:49:16.880202   35097 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 20:49:16.880206   35097 command_runner.go:130] > # ]
	I0108 20:49:16.880213   35097 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 20:49:16.880220   35097 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 20:49:16.880227   35097 command_runner.go:130] > # its default mounts from the following two files:
	I0108 20:49:16.880230   35097 command_runner.go:130] > #
	I0108 20:49:16.880236   35097 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 20:49:16.880244   35097 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 20:49:16.880250   35097 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 20:49:16.880256   35097 command_runner.go:130] > #
	I0108 20:49:16.880262   35097 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 20:49:16.880270   35097 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 20:49:16.880277   35097 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 20:49:16.880284   35097 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 20:49:16.880291   35097 command_runner.go:130] > #
	I0108 20:49:16.880298   35097 command_runner.go:130] > # default_mounts_file = ""
	I0108 20:49:16.880303   35097 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 20:49:16.880313   35097 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 20:49:16.880317   35097 command_runner.go:130] > pids_limit = 1024
	I0108 20:49:16.880323   35097 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 20:49:16.880331   35097 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 20:49:16.880338   35097 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 20:49:16.880348   35097 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 20:49:16.880357   35097 command_runner.go:130] > # log_size_max = -1
	I0108 20:49:16.880366   35097 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 20:49:16.880372   35097 command_runner.go:130] > # log_to_journald = false
	I0108 20:49:16.880378   35097 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 20:49:16.880385   35097 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 20:49:16.880390   35097 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 20:49:16.880396   35097 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 20:49:16.880401   35097 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 20:49:16.880406   35097 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 20:49:16.880411   35097 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 20:49:16.880415   35097 command_runner.go:130] > # read_only = false
	I0108 20:49:16.880422   35097 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 20:49:16.880431   35097 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 20:49:16.880438   35097 command_runner.go:130] > # live configuration reload.
	I0108 20:49:16.880442   35097 command_runner.go:130] > # log_level = "info"
	I0108 20:49:16.880450   35097 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 20:49:16.880455   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:49:16.880464   35097 command_runner.go:130] > # log_filter = ""
	I0108 20:49:16.880470   35097 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 20:49:16.880478   35097 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 20:49:16.880482   35097 command_runner.go:130] > # separated by comma.
	I0108 20:49:16.880488   35097 command_runner.go:130] > # uid_mappings = ""
	I0108 20:49:16.880494   35097 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 20:49:16.880500   35097 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 20:49:16.880504   35097 command_runner.go:130] > # separated by comma.
	I0108 20:49:16.880508   35097 command_runner.go:130] > # gid_mappings = ""
	I0108 20:49:16.880516   35097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 20:49:16.880522   35097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:49:16.880530   35097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:49:16.880535   35097 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 20:49:16.880543   35097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 20:49:16.880551   35097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:49:16.880557   35097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:49:16.880564   35097 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 20:49:16.880570   35097 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 20:49:16.880578   35097 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 20:49:16.880584   35097 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 20:49:16.880590   35097 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 20:49:16.880596   35097 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 20:49:16.880601   35097 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 20:49:16.880608   35097 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 20:49:16.880614   35097 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 20:49:16.880619   35097 command_runner.go:130] > drop_infra_ctr = false
	I0108 20:49:16.880625   35097 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 20:49:16.880635   35097 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 20:49:16.880644   35097 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 20:49:16.880648   35097 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 20:49:16.880654   35097 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 20:49:16.880663   35097 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 20:49:16.880670   35097 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 20:49:16.880677   35097 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 20:49:16.880684   35097 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 20:49:16.880690   35097 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 20:49:16.880698   35097 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 20:49:16.880706   35097 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 20:49:16.880713   35097 command_runner.go:130] > # default_runtime = "runc"
	I0108 20:49:16.880718   35097 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 20:49:16.880727   35097 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 20:49:16.880738   35097 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 20:49:16.880745   35097 command_runner.go:130] > # creation as a file is not desired either.
	I0108 20:49:16.880752   35097 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 20:49:16.880759   35097 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 20:49:16.880764   35097 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 20:49:16.880770   35097 command_runner.go:130] > # ]
	I0108 20:49:16.880776   35097 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 20:49:16.880784   35097 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 20:49:16.880792   35097 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 20:49:16.880801   35097 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 20:49:16.880805   35097 command_runner.go:130] > #
	I0108 20:49:16.880810   35097 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 20:49:16.880817   35097 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 20:49:16.880821   35097 command_runner.go:130] > #  runtime_type = "oci"
	I0108 20:49:16.880829   35097 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 20:49:16.880833   35097 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 20:49:16.880838   35097 command_runner.go:130] > #  allowed_annotations = []
	I0108 20:49:16.880842   35097 command_runner.go:130] > # Where:
	I0108 20:49:16.880847   35097 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 20:49:16.880853   35097 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 20:49:16.880861   35097 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 20:49:16.880867   35097 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 20:49:16.880873   35097 command_runner.go:130] > #   in $PATH.
	I0108 20:49:16.880879   35097 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 20:49:16.880884   35097 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 20:49:16.880890   35097 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 20:49:16.880896   35097 command_runner.go:130] > #   state.
	I0108 20:49:16.880905   35097 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 20:49:16.880911   35097 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0108 20:49:16.880920   35097 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 20:49:16.880925   35097 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 20:49:16.880933   35097 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 20:49:16.880941   35097 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 20:49:16.880948   35097 command_runner.go:130] > #   The currently recognized values are:
	I0108 20:49:16.880955   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 20:49:16.880964   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 20:49:16.880970   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 20:49:16.880976   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 20:49:16.880985   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 20:49:16.880993   35097 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 20:49:16.881001   35097 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 20:49:16.881007   35097 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 20:49:16.881014   35097 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 20:49:16.881019   35097 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 20:49:16.881028   35097 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 20:49:16.881032   35097 command_runner.go:130] > runtime_type = "oci"
	I0108 20:49:16.881038   35097 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 20:49:16.881043   35097 command_runner.go:130] > runtime_config_path = ""
	I0108 20:49:16.881046   35097 command_runner.go:130] > monitor_path = ""
	I0108 20:49:16.881052   35097 command_runner.go:130] > monitor_cgroup = ""
	I0108 20:49:16.881056   35097 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 20:49:16.881065   35097 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 20:49:16.881068   35097 command_runner.go:130] > # running containers
	I0108 20:49:16.881073   35097 command_runner.go:130] > #[crio.runtime.runtimes.crun]
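Following the handler format documented above, the commented-out crun stub could be turned into a real handler with a drop-in such as the sketch below. This is purely illustrative: crun is not shipped in this test image, and the paths are assumptions.

    sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"
    runtime_type = "oci"
    runtime_root = "/run/crun"
    EOF
    sudo systemctl restart crio

A pod would then opt into it through a RuntimeClass whose handler field is "crun".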
	I0108 20:49:16.881082   35097 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 20:49:16.881128   35097 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 20:49:16.881137   35097 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 20:49:16.881142   35097 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 20:49:16.881147   35097 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 20:49:16.881151   35097 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 20:49:16.881156   35097 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 20:49:16.881163   35097 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 20:49:16.881172   35097 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 20:49:16.881178   35097 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 20:49:16.881186   35097 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 20:49:16.881192   35097 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 20:49:16.881199   35097 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 20:49:16.881208   35097 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 20:49:16.881217   35097 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 20:49:16.881228   35097 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 20:49:16.881238   35097 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 20:49:16.881243   35097 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 20:49:16.881252   35097 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 20:49:16.881258   35097 command_runner.go:130] > # Example:
	I0108 20:49:16.881265   35097 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 20:49:16.881270   35097 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 20:49:16.881275   35097 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 20:49:16.881281   35097 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 20:49:16.881287   35097 command_runner.go:130] > # cpuset = 0
	I0108 20:49:16.881290   35097 command_runner.go:130] > # cpushares = "0-1"
	I0108 20:49:16.881296   35097 command_runner.go:130] > # Where:
	I0108 20:49:16.881300   35097 command_runner.go:130] > # The workload name is workload-type.
	I0108 20:49:16.881309   35097 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 20:49:16.881319   35097 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 20:49:16.881327   35097 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 20:49:16.881334   35097 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 20:49:16.881342   35097 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 20:49:16.881346   35097 command_runner.go:130] > # 
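The workloads table itself stays commented out in this configuration, so the annotation mechanism described above is inactive in this run. Purely as an illustration of the opt-in the comments describe, a pod carrying the example activation annotation (key only, value ignored) would look like:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo            # hypothetical pod, not created by this test
      annotations:
        io.crio/workload: ""         # the activation_annotation from the example above
    spec:
      containers:
      - name: demo
        image: registry.k8s.io/pause:3.9
    EOF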
	I0108 20:49:16.881358   35097 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 20:49:16.881364   35097 command_runner.go:130] > #
	I0108 20:49:16.881370   35097 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 20:49:16.881378   35097 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 20:49:16.881384   35097 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 20:49:16.881393   35097 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 20:49:16.881399   35097 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 20:49:16.881403   35097 command_runner.go:130] > [crio.image]
	I0108 20:49:16.881409   35097 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 20:49:16.881414   35097 command_runner.go:130] > # default_transport = "docker://"
	I0108 20:49:16.881422   35097 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 20:49:16.881431   35097 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:49:16.881435   35097 command_runner.go:130] > # global_auth_file = ""
	I0108 20:49:16.881440   35097 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 20:49:16.881446   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:49:16.881451   35097 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 20:49:16.881460   35097 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 20:49:16.881468   35097 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:49:16.881473   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:49:16.881480   35097 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 20:49:16.881485   35097 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 20:49:16.881492   35097 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 20:49:16.881498   35097 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 20:49:16.881506   35097 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 20:49:16.881510   35097 command_runner.go:130] > # pause_command = "/pause"
	I0108 20:49:16.881518   35097 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 20:49:16.881527   35097 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 20:49:16.881533   35097 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 20:49:16.881540   35097 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 20:49:16.881545   35097 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 20:49:16.881549   35097 command_runner.go:130] > # signature_policy = ""
	I0108 20:49:16.881555   35097 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 20:49:16.881560   35097 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 20:49:16.881564   35097 command_runner.go:130] > # changing them here.
	I0108 20:49:16.881568   35097 command_runner.go:130] > # insecure_registries = [
	I0108 20:49:16.881571   35097 command_runner.go:130] > # ]
	I0108 20:49:16.881578   35097 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 20:49:16.881583   35097 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 20:49:16.881587   35097 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 20:49:16.881592   35097 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 20:49:16.881596   35097 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 20:49:16.881601   35097 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 20:49:16.881605   35097 command_runner.go:130] > # CNI plugins.
	I0108 20:49:16.881608   35097 command_runner.go:130] > [crio.network]
	I0108 20:49:16.881614   35097 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 20:49:16.881621   35097 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 20:49:16.881631   35097 command_runner.go:130] > # cni_default_network = ""
	I0108 20:49:16.881640   35097 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 20:49:16.881645   35097 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 20:49:16.881651   35097 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 20:49:16.881654   35097 command_runner.go:130] > # plugin_dirs = [
	I0108 20:49:16.881658   35097 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 20:49:16.881664   35097 command_runner.go:130] > # ]
	I0108 20:49:16.881669   35097 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 20:49:16.881675   35097 command_runner.go:130] > [crio.metrics]
	I0108 20:49:16.881682   35097 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 20:49:16.881689   35097 command_runner.go:130] > enable_metrics = true
	I0108 20:49:16.881693   35097 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 20:49:16.881701   35097 command_runner.go:130] > # By default, all metrics are enabled.
	I0108 20:49:16.881707   35097 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 20:49:16.881715   35097 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 20:49:16.881721   35097 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 20:49:16.881727   35097 command_runner.go:130] > # metrics_collectors = [
	I0108 20:49:16.881731   35097 command_runner.go:130] > # 	"operations",
	I0108 20:49:16.881737   35097 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 20:49:16.881744   35097 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 20:49:16.881748   35097 command_runner.go:130] > # 	"operations_errors",
	I0108 20:49:16.881752   35097 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 20:49:16.881758   35097 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 20:49:16.881763   35097 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 20:49:16.881768   35097 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 20:49:16.881772   35097 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 20:49:16.881778   35097 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 20:49:16.881782   35097 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 20:49:16.881786   35097 command_runner.go:130] > # 	"containers_oom_total",
	I0108 20:49:16.881792   35097 command_runner.go:130] > # 	"containers_oom",
	I0108 20:49:16.881796   35097 command_runner.go:130] > # 	"processes_defunct",
	I0108 20:49:16.881801   35097 command_runner.go:130] > # 	"operations_total",
	I0108 20:49:16.881805   35097 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 20:49:16.881812   35097 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 20:49:16.881816   35097 command_runner.go:130] > # 	"operations_errors_total",
	I0108 20:49:16.881824   35097 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 20:49:16.881830   35097 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 20:49:16.881837   35097 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 20:49:16.881842   35097 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 20:49:16.881846   35097 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 20:49:16.881850   35097 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 20:49:16.881854   35097 command_runner.go:130] > # ]
	I0108 20:49:16.881859   35097 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 20:49:16.881865   35097 command_runner.go:130] > # metrics_port = 9090
	I0108 20:49:16.881870   35097 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 20:49:16.881874   35097 command_runner.go:130] > # metrics_socket = ""
	I0108 20:49:16.881882   35097 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 20:49:16.881888   35097 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 20:49:16.881896   35097 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 20:49:16.881901   35097 command_runner.go:130] > # certificate on any modification event.
	I0108 20:49:16.881907   35097 command_runner.go:130] > # metrics_cert = ""
	I0108 20:49:16.881912   35097 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 20:49:16.881918   35097 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 20:49:16.881922   35097 command_runner.go:130] > # metrics_key = ""
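With enable_metrics = true and the default metrics_port of 9090, the collectors listed above are exposed in Prometheus text format on the node. A spot check (the endpoint is the default; the command itself is not part of the test):

    minikube -p multinode-340815 ssh "curl -s http://127.0.0.1:9090/metrics | grep crio_operations | head -n 5"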
	I0108 20:49:16.881931   35097 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 20:49:16.881940   35097 command_runner.go:130] > [crio.tracing]
	I0108 20:49:16.881946   35097 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 20:49:16.881952   35097 command_runner.go:130] > # enable_tracing = false
	I0108 20:49:16.881957   35097 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 20:49:16.881964   35097 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 20:49:16.881969   35097 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 20:49:16.881973   35097 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 20:49:16.881981   35097 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 20:49:16.881985   35097 command_runner.go:130] > [crio.stats]
	I0108 20:49:16.881991   35097 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 20:49:16.881998   35097 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 20:49:16.882003   35097 command_runner.go:130] > # stats_collection_period = 0
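Everything above is CRI-O echoing its effective configuration while minikube provisions the node. The same merged view can be regenerated at any time; crio config prints the configuration the daemon would start with (output trimmed here only for brevity):

    minikube -p multinode-340815 ssh "sudo crio config 2>/dev/null | head -n 40"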
	I0108 20:49:16.882085   35097 cni.go:84] Creating CNI manager for ""
	I0108 20:49:16.882097   35097 cni.go:136] 3 nodes found, recommending kindnet
	I0108 20:49:16.882114   35097 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:49:16.882133   35097 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-340815 NodeName:multinode-340815 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:49:16.882250   35097 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-340815"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:49:16.882318   35097 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-340815 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 20:49:16.882370   35097 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:49:16.894503   35097 command_runner.go:130] > kubeadm
	I0108 20:49:16.894519   35097 command_runner.go:130] > kubectl
	I0108 20:49:16.894523   35097 command_runner.go:130] > kubelet
	I0108 20:49:16.894837   35097 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:49:16.894918   35097 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 20:49:16.906002   35097 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0108 20:49:16.923896   35097 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:49:16.941521   35097 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
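At this point three artifacts have been written to the node: the kubelet systemd drop-in (10-kubeadm.conf), the kubelet unit itself, and the rendered kubeadm config shown above. They can be inspected in place to confirm they match what the log printed, for example:

    minikube -p multinode-340815 ssh "sudo systemctl cat kubelet"
    minikube -p multinode-340815 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"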
	I0108 20:49:16.959876   35097 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0108 20:49:16.964206   35097 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.196	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 20:49:16.977880   35097 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815 for IP: 192.168.39.196
	I0108 20:49:16.977908   35097 certs.go:190] acquiring lock for shared ca certs: {Name:mke01aa9d73e320a9a3907677cf29c75f0fa86d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:49:16.978033   35097 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key
	I0108 20:49:16.978077   35097 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key
	I0108 20:49:16.978145   35097 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key
	I0108 20:49:16.978209   35097 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.key.85aad866
	I0108 20:49:16.978242   35097 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.key
	I0108 20:49:16.978256   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 20:49:16.978271   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 20:49:16.978285   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 20:49:16.978297   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 20:49:16.978310   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 20:49:16.978328   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 20:49:16.978347   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 20:49:16.978360   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 20:49:16.978420   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem (1338 bytes)
	W0108 20:49:16.978464   35097 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896_empty.pem, impossibly tiny 0 bytes
	I0108 20:49:16.978482   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:49:16.978514   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem (1082 bytes)
	I0108 20:49:16.978552   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:49:16.978585   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem (1675 bytes)
	I0108 20:49:16.978655   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem (1708 bytes)
	I0108 20:49:16.978692   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> /usr/share/ca-certificates/178962.pem
	I0108 20:49:16.978716   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:49:16.978738   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem -> /usr/share/ca-certificates/17896.pem
	I0108 20:49:16.979391   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 20:49:17.004647   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 20:49:17.030450   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 20:49:17.054469   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 20:49:17.079296   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:49:17.104030   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:49:17.129448   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:49:17.154949   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:49:17.184612   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /usr/share/ca-certificates/178962.pem (1708 bytes)
	I0108 20:49:17.211497   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:49:17.236581   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem --> /usr/share/ca-certificates/17896.pem (1338 bytes)
	I0108 20:49:17.260619   35097 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 20:49:17.277584   35097 ssh_runner.go:195] Run: openssl version
	I0108 20:49:17.283177   35097 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 20:49:17.283583   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178962.pem && ln -fs /usr/share/ca-certificates/178962.pem /etc/ssl/certs/178962.pem"
	I0108 20:49:17.294206   35097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178962.pem
	I0108 20:49:17.299800   35097 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 20:49:17.299853   35097 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 20:49:17.299902   35097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178962.pem
	I0108 20:49:17.305822   35097 command_runner.go:130] > 3ec20f2e
	I0108 20:49:17.306133   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178962.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:49:17.316677   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:49:17.326631   35097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:49:17.331406   35097 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:49:17.331430   35097 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:49:17.331474   35097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:49:17.337075   35097 command_runner.go:130] > b5213941
	I0108 20:49:17.337147   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:49:17.346474   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17896.pem && ln -fs /usr/share/ca-certificates/17896.pem /etc/ssl/certs/17896.pem"
	I0108 20:49:17.356264   35097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17896.pem
	I0108 20:49:17.361082   35097 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 20:49:17.361124   35097 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 20:49:17.361169   35097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17896.pem
	I0108 20:49:17.366742   35097 command_runner.go:130] > 51391683
	I0108 20:49:17.366841   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17896.pem /etc/ssl/certs/51391683.0"
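The three openssl x509 -hash / ln -fs pairs above implement the standard OpenSSL CA-directory layout: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs as <subject-hash>.0 so TLS clients can locate it. The links can be verified with the hash values printed in this run:

    minikube -p multinode-340815 ssh "ls -l /etc/ssl/certs/3ec20f2e.0 /etc/ssl/certs/b5213941.0 /etc/ssl/certs/51391683.0"
    minikube -p multinode-340815 ssh "openssl x509 -noout -subject -in /etc/ssl/certs/b5213941.0"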
	I0108 20:49:17.376812   35097 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:49:17.381374   35097 command_runner.go:130] > ca.crt
	I0108 20:49:17.381409   35097 command_runner.go:130] > ca.key
	I0108 20:49:17.381417   35097 command_runner.go:130] > healthcheck-client.crt
	I0108 20:49:17.381424   35097 command_runner.go:130] > healthcheck-client.key
	I0108 20:49:17.381437   35097 command_runner.go:130] > peer.crt
	I0108 20:49:17.381443   35097 command_runner.go:130] > peer.key
	I0108 20:49:17.381450   35097 command_runner.go:130] > server.crt
	I0108 20:49:17.381460   35097 command_runner.go:130] > server.key
	I0108 20:49:17.381527   35097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 20:49:17.387841   35097 command_runner.go:130] > Certificate will not expire
	I0108 20:49:17.387924   35097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 20:49:17.394277   35097 command_runner.go:130] > Certificate will not expire
	I0108 20:49:17.394395   35097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 20:49:17.400478   35097 command_runner.go:130] > Certificate will not expire
	I0108 20:49:17.400551   35097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 20:49:17.406727   35097 command_runner.go:130] > Certificate will not expire
	I0108 20:49:17.406808   35097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 20:49:17.413006   35097 command_runner.go:130] > Certificate will not expire
	I0108 20:49:17.413090   35097 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0108 20:49:17.419719   35097 command_runner.go:130] > Certificate will not expire
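Each "openssl x509 -noout -checkend 86400" call asks whether the certificate expires within the next 86400 seconds; the "Certificate will not expire" replies mean every existing control-plane certificate is still valid and can be reused on restart instead of being regenerated. A rough Go equivalent of that check, assuming the etcd server certificate path from the log (an illustration, not minikube's code):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt") // example path from the log
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same question as `openssl x509 -checkend 86400`.
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("Certificate will expire within 24h")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }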
	I0108 20:49:17.419799   35097 kubeadm.go:404] StartCluster: {Name:multinode-340815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:49:17.419913   35097 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 20:49:17.419978   35097 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:49:17.463398   35097 cri.go:89] found id: ""
	I0108 20:49:17.463465   35097 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 20:49:17.473477   35097 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0108 20:49:17.473500   35097 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0108 20:49:17.473509   35097 command_runner.go:130] > /var/lib/minikube/etcd:
	I0108 20:49:17.473515   35097 command_runner.go:130] > member
	I0108 20:49:17.473584   35097 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 20:49:17.473603   35097 kubeadm.go:636] restartCluster start
	I0108 20:49:17.473670   35097 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 20:49:17.483690   35097 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:17.484200   35097 kubeconfig.go:92] found "multinode-340815" server: "https://192.168.39.196:8443"
	I0108 20:49:17.484609   35097 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:49:17.484873   35097 kapi.go:59] client config for multinode-340815: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:49:17.485626   35097 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 20:49:17.485829   35097 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 20:49:17.495578   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:17.495644   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:17.506948   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:17.996588   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:17.996671   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:18.008365   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:18.495924   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:18.496030   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:18.508283   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:18.995810   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:18.995896   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:19.007683   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:19.496289   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:19.496394   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:19.507466   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:19.995662   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:19.995746   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:20.007351   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:20.495882   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:20.495953   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:20.507280   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:20.995861   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:20.995952   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:21.007369   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:21.496212   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:21.496300   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:21.507924   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:21.995672   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:21.995765   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:22.007521   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:22.495716   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:22.495794   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:22.506940   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:22.996597   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:22.996685   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:23.008603   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:23.496243   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:23.496314   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:23.507832   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:23.996403   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:23.996487   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:24.007953   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:24.496550   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:24.496632   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:24.507855   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:24.995812   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:24.995894   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:25.006988   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:25.496640   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:25.496736   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:25.508246   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:25.995766   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:25.995870   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:26.007076   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:26.495610   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:26.495687   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:26.508245   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:26.996147   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:26.996223   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:27.007832   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:27.496649   35097 api_server.go:166] Checking apiserver status ...
	I0108 20:49:27.496738   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 20:49:27.508127   35097 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 20:49:27.508158   35097 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
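The repeated "Checking apiserver status" entries above poll "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly twice a second; when the attempts keep failing, the caller gives up with "context deadline exceeded" and concludes the cluster needs a reconfigure. A sketch of that deadline-bounded polling pattern (the 10-second budget below is an assumption for illustration; minikube's real timeout may differ):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        for {
            // pgrep exits 0 and prints the pid only if the process exists.
            if out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output(); err == nil {
                fmt.Printf("apiserver pid: %s", out)
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("needs reconfigure: apiserver error:", ctx.Err()) // matches the log's verdict
                return
            case <-time.After(500 * time.Millisecond):
            }
        }
    }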
	I0108 20:49:27.508190   35097 kubeadm.go:1135] stopping kube-system containers ...
	I0108 20:49:27.508212   35097 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 20:49:27.508262   35097 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 20:49:27.551746   35097 cri.go:89] found id: ""
	I0108 20:49:27.551816   35097 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 20:49:27.567495   35097 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 20:49:27.576788   35097 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 20:49:27.576827   35097 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 20:49:27.576838   35097 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 20:49:27.576859   35097 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:49:27.576915   35097 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 20:49:27.576978   35097 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 20:49:27.585882   35097 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 20:49:27.585912   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:49:27.715055   35097 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 20:49:27.715084   35097 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 20:49:27.715095   35097 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 20:49:27.715107   35097 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 20:49:27.715117   35097 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0108 20:49:27.715126   35097 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0108 20:49:27.715135   35097 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0108 20:49:27.715143   35097 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0108 20:49:27.715153   35097 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0108 20:49:27.715166   35097 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 20:49:27.715180   35097 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 20:49:27.715190   35097 command_runner.go:130] > [certs] Using the existing "sa" key
	I0108 20:49:27.715220   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:49:27.764313   35097 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 20:49:27.951383   35097 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 20:49:28.161125   35097 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 20:49:28.640202   35097 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 20:49:28.744547   35097 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 20:49:28.747220   35097 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.031970383s)
	I0108 20:49:28.747253   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:49:28.815928   35097 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:49:28.818617   35097 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:49:28.818677   35097 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 20:49:28.942561   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:49:29.027164   35097 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 20:49:29.027193   35097 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 20:49:29.027205   35097 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 20:49:29.028589   35097 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 20:49:29.031538   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:49:29.125282   35097 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
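Because the kubeconfig files under /etc/kubernetes were missing, the restart path re-runs individual "kubeadm init" phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of performing a full init. A condensed sketch of driving that same phase sequence from Go (the commands and the PATH override are copied from the log lines above; the loop itself is only illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("phase %q:\n%s\n", p, out)
            if err != nil {
                panic(err)
            }
        }
    }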
	I0108 20:49:29.125322   35097 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:49:29.125391   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:49:29.626306   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:49:30.125914   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:49:30.626421   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:49:31.125520   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:49:31.625733   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:49:31.651058   35097 command_runner.go:130] > 1109
	I0108 20:49:31.651403   35097 api_server.go:72] duration metric: took 2.526078501s to wait for apiserver process to appear ...
	I0108 20:49:31.651422   35097 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:49:31.651442   35097 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0108 20:49:35.151196   35097 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 20:49:35.151236   35097 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 20:49:35.151249   35097 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0108 20:49:35.216552   35097 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 20:49:35.216585   35097 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 20:49:35.216608   35097 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0108 20:49:35.240745   35097 api_server.go:279] https://192.168.39.196:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 20:49:35.240779   35097 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 20:49:35.652268   35097 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0108 20:49:35.657548   35097 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 20:49:35.657593   35097 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 20:49:36.152220   35097 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0108 20:49:36.160249   35097 api_server.go:279] https://192.168.39.196:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 20:49:36.160281   35097 api_server.go:103] status: https://192.168.39.196:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
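When any check fails, /healthz answers 500 with one line per check: "[+]" marks a passing check and "[-]" a failing one, so the blockers here are poststarthook/rbac/bootstrap-roles and, in the first response, poststarthook/scheduling/bootstrap-system-priority-classes; by this second response only the RBAC hook is still pending. A small helper that extracts the failing check names from such a body (illustrative only):

    package main

    import (
        "fmt"
        "strings"
    )

    // failedChecks returns the names of checks reported as "[-]..." in a
    // verbose /healthz response body like the ones in the log above.
    func failedChecks(body string) []string {
        var failed []string
        for _, line := range strings.Split(body, "\n") {
            line = strings.TrimSpace(line)
            if strings.HasPrefix(line, "[-]") {
                name := strings.TrimPrefix(line, "[-]")
                if i := strings.Index(name, " failed"); i >= 0 {
                    name = name[:i]
                }
                failed = append(failed, name)
            }
        }
        return failed
    }

    func main() {
        body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
        fmt.Println(failedChecks(body)) // -> [poststarthook/rbac/bootstrap-roles]
    }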
	I0108 20:49:36.652514   35097 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0108 20:49:36.657419   35097 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0108 20:49:36.657487   35097 round_trippers.go:463] GET https://192.168.39.196:8443/version
	I0108 20:49:36.657497   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:36.657505   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:36.657515   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:36.665248   35097 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 20:49:36.665266   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:36.665275   35097 round_trippers.go:580]     Audit-Id: fb86e69c-1ef5-451a-bd92-80fa5b3d0f4d
	I0108 20:49:36.665291   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:36.665299   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:36.665314   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:36.665324   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:36.665333   35097 round_trippers.go:580]     Content-Length: 264
	I0108 20:49:36.665342   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:36 GMT
	I0108 20:49:36.665364   35097 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 20:49:36.665444   35097 api_server.go:141] control plane version: v1.28.4
	I0108 20:49:36.665464   35097 api_server.go:131] duration metric: took 5.014034809s to wait for apiserver health ...
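Once /healthz returns 200, the control-plane version is confirmed with a GET to /version, whose JSON body above carries major, minor and gitVersion. A minimal decoder for that response (the struct is a pared-down assumption covering only the fields shown in the log, not the canonical apimachinery version type):

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
    )

    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
    }

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // a real client would load the cluster CA
        }}
        resp, err := client.Get("https://192.168.39.196:8443/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // e.g. v1.28.4
    }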
	I0108 20:49:36.665475   35097 cni.go:84] Creating CNI manager for ""
	I0108 20:49:36.665486   35097 cni.go:136] 3 nodes found, recommending kindnet
	I0108 20:49:36.667538   35097 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 20:49:36.669024   35097 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:49:36.701993   35097 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 20:49:36.702028   35097 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 20:49:36.702038   35097 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 20:49:36.702049   35097 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:49:36.702070   35097 command_runner.go:130] > Access: 2024-01-08 20:49:02.982432026 +0000
	I0108 20:49:36.702078   35097 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 20:49:36.702091   35097 command_runner.go:130] > Change: 2024-01-08 20:49:01.008432026 +0000
	I0108 20:49:36.702101   35097 command_runner.go:130] >  Birth: -
	I0108 20:49:36.703240   35097 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:49:36.703258   35097 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:49:36.745277   35097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:49:38.016497   35097 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 20:49:38.016519   35097 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 20:49:38.016525   35097 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 20:49:38.016538   35097 command_runner.go:130] > daemonset.apps/kindnet configured
	I0108 20:49:38.016638   35097 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.271328406s)
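With three nodes and no explicit CNI choice, minikube selects kindnet and applies its manifest with the cluster's own kubectl binary; the "unchanged"/"configured" lines are normal on restart because the objects already exist, so the apply is idempotent. Roughly the same invocation from Go (arguments copied from the log line above; this is a sketch, not minikube's code path):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
        fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
        if err != nil {
            panic(err)
        }
    }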
	I0108 20:49:38.016670   35097 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:49:38.016774   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:49:38.016786   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.016797   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.016806   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.021519   35097 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:49:38.021542   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.021551   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.021560   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.021569   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.021577   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:37 GMT
	I0108 20:49:38.021585   35097 round_trippers.go:580]     Audit-Id: 93bff0a3-e1ef-4168-a4e2-1f911b1c6dcf
	I0108 20:49:38.021594   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.022972   35097 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"799"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83708 chars]
	I0108 20:49:38.027205   35097 system_pods.go:59] 12 kube-system pods found
	I0108 20:49:38.027248   35097 system_pods.go:61] "coredns-5dd5756b68-h4v6v" [5c1ccbb8-1747-4b6f-b40c-c54670e49d54] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 20:49:38.027259   35097 system_pods.go:61] "etcd-multinode-340815" [c6d1e2c4-6dbc-4495-ac68-c4b030195c2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 20:49:38.027272   35097 system_pods.go:61] "kindnet-h48qs" [65d532d3-b3ca-493d-b287-1b03dbdad538] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 20:49:38.027286   35097 system_pods.go:61] "kindnet-tqjx8" [cb8397d0-fc25-459f-9ed2-aacb628f0e59] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 20:49:38.027303   35097 system_pods.go:61] "kindnet-wfgln" [67bb4772-2e5d-489d-93c5-df2a7254be5d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 20:49:38.027315   35097 system_pods.go:61] "kube-apiserver-multinode-340815" [523b3dcf-2fae-43b4-a9c6-cd2337ae6d6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 20:49:38.027332   35097 system_pods.go:61] "kube-controller-manager-multinode-340815" [3b29ca3f-d23b-4add-a5fb-d59381398862] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 20:49:38.027339   35097 system_pods.go:61] "kube-proxy-j5w6d" [61568130-b69e-48ce-86f0-9a9e63ed99ab] Running
	I0108 20:49:38.027346   35097 system_pods.go:61] "kube-proxy-lxkrv" [d7fed398-b2ff-4ec4-a1a6-d0a7b8dca989] Running
	I0108 20:49:38.027353   35097 system_pods.go:61] "kube-proxy-z9xrv" [a0843325-2adf-4c2f-8489-067554648b52] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 20:49:38.027358   35097 system_pods.go:61] "kube-scheduler-multinode-340815" [008c4fe8-78b1-4326-8452-215037af26d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 20:49:38.027365   35097 system_pods.go:61] "storage-provisioner" [de357297-4bd9-4c71-ada5-ceace0d38cfb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 20:49:38.027375   35097 system_pods.go:74] duration metric: took 10.696602ms to wait for pod list to return data ...
	I0108 20:49:38.027390   35097 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:49:38.027462   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0108 20:49:38.027473   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.027484   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.027498   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.030869   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:38.030894   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.030906   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.030914   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.030928   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.030936   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:38.030945   35097 round_trippers.go:580]     Audit-Id: cb3a3ae7-a5b7-4e44-8d83-6e060683d1d2
	I0108 20:49:38.030953   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.031198   35097 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"799"},"items":[{"metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16475 chars]
	I0108 20:49:38.032355   35097 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:49:38.032390   35097 node_conditions.go:123] node cpu capacity is 2
	I0108 20:49:38.032406   35097 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:49:38.032413   35097 node_conditions.go:123] node cpu capacity is 2
	I0108 20:49:38.032420   35097 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:49:38.032426   35097 node_conditions.go:123] node cpu capacity is 2
	I0108 20:49:38.032436   35097 node_conditions.go:105] duration metric: took 5.037366ms to run NodePressure ...
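The NodePressure step lists all nodes and reads the capacity each one reports (2 CPUs and 17784752Ki of ephemeral storage per node here) to confirm none is under resource pressure before reconfiguring addons. A sketch that pulls the same numbers from the NodeList endpoint, authenticating with the profile's client certificate paths seen earlier in the log (the minimal JSON structs below are assumptions, not client-go types):

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
    )

    type nodeList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Capacity map[string]string `json:"capacity"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        cert, err := tls.LoadX509KeyPair(
            "/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt",
            "/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key")
        if err != nil {
            panic(err)
        }
        client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
            Certificates:       []tls.Certificate{cert},
            InsecureSkipVerify: true, // a real client would load the cluster CA instead
        }}}
        resp, err := client.Get("https://192.168.39.196:8443/api/v1/nodes")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var nl nodeList
        if err := json.NewDecoder(resp.Body).Decode(&nl); err != nil {
            panic(err)
        }
        for _, n := range nl.Items {
            fmt.Println(n.Metadata.Name, "cpu:", n.Status.Capacity["cpu"],
                "ephemeral-storage:", n.Status.Capacity["ephemeral-storage"])
        }
    }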
	I0108 20:49:38.032460   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 20:49:38.375466   35097 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 20:49:38.375514   35097 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 20:49:38.375550   35097 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 20:49:38.375648   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0108 20:49:38.375661   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.375673   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.375683   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.379876   35097 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:49:38.379901   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.379913   35097 round_trippers.go:580]     Audit-Id: 26623167-3829-4d34-89bc-d9537557636b
	I0108 20:49:38.379922   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.379928   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.379934   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.379938   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.379943   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:38.381475   35097 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"813"},"items":[{"metadata":{"name":"etcd-multinode-340815","namespace":"kube-system","uid":"c6d1e2c4-6dbc-4495-ac68-c4b030195c2c","resourceVersion":"794","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.196:2379","kubernetes.io/config.hash":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.mirror":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.seen":"2024-01-08T20:38:05.870869333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0108 20:49:38.382507   35097 kubeadm.go:787] kubelet initialised
	I0108 20:49:38.382526   35097 kubeadm.go:788] duration metric: took 6.966991ms waiting for restarted kubelet to initialise ...
	I0108 20:49:38.382533   35097 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:49:38.382597   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:49:38.382608   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.382619   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.382625   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.387159   35097 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:49:38.387182   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.387192   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:38.387201   35097 round_trippers.go:580]     Audit-Id: 6a4c9334-3e13-484f-9dd8-6f8510a31c3e
	I0108 20:49:38.387210   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.387219   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.387228   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.387234   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.388897   35097 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"813"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83207 chars]
	I0108 20:49:38.391665   35097 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:38.391742   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:38.391748   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.391755   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.391764   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.398593   35097 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0108 20:49:38.398618   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.398628   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.398636   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:38.398645   35097 round_trippers.go:580]     Audit-Id: 83db2d79-7017-4906-a02f-a5cd8bfedf98
	I0108 20:49:38.398654   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.398662   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.398674   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.398945   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:38.399369   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:38.399384   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.399391   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.399397   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.402662   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:38.402682   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.402692   35097 round_trippers.go:580]     Audit-Id: c2fa8952-ad12-4c11-8335-5e68bcd57ee8
	I0108 20:49:38.402700   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.402708   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.402725   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.402733   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.402746   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:38.403193   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:38.403480   35097 pod_ready.go:97] node "multinode-340815" hosting pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-340815" has status "Ready":"False"
	I0108 20:49:38.403498   35097 pod_ready.go:81] duration metric: took 11.811818ms waiting for pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace to be "Ready" ...
	E0108 20:49:38.403506   35097 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-340815" hosting pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-340815" has status "Ready":"False"
	I0108 20:49:38.403516   35097 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:38.403573   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-340815
	I0108 20:49:38.403580   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.403587   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.403593   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.405797   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:38.405815   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.405825   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.405833   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.405848   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.405861   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.405869   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:38.405877   35097 round_trippers.go:580]     Audit-Id: c1400b67-836b-47bb-ae67-89f3b6b9e1de
	I0108 20:49:38.406216   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-340815","namespace":"kube-system","uid":"c6d1e2c4-6dbc-4495-ac68-c4b030195c2c","resourceVersion":"794","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.196:2379","kubernetes.io/config.hash":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.mirror":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.seen":"2024-01-08T20:38:05.870869333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0108 20:49:38.406704   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:38.406722   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.406733   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.406743   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.408767   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:49:38.408787   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.408795   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.408803   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.408809   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.408816   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:38.408824   35097 round_trippers.go:580]     Audit-Id: b030157f-0c2b-46ba-bc07-ce4392a4bad7
	I0108 20:49:38.408833   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.408998   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:38.409267   35097 pod_ready.go:97] node "multinode-340815" hosting pod "etcd-multinode-340815" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-340815" has status "Ready":"False"
	I0108 20:49:38.409282   35097 pod_ready.go:81] duration metric: took 5.754476ms waiting for pod "etcd-multinode-340815" in "kube-system" namespace to be "Ready" ...
	E0108 20:49:38.409290   35097 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-340815" hosting pod "etcd-multinode-340815" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-340815" has status "Ready":"False"
	I0108 20:49:38.409307   35097 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:38.409365   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-340815
	I0108 20:49:38.409372   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.409379   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.409387   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.411399   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:49:38.411425   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.411435   35097 round_trippers.go:580]     Audit-Id: c8f686ba-6800-47c4-8fb3-d779a5e304a1
	I0108 20:49:38.411443   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.411451   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.411459   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.411471   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.411479   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:38.411651   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-340815","namespace":"kube-system","uid":"523b3dcf-2fae-43b4-a9c6-cd2337ae6d6f","resourceVersion":"795","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.196:8443","kubernetes.io/config.hash":"5a9f4acc9b0ffa502cc0493a6d857b92","kubernetes.io/config.mirror":"5a9f4acc9b0ffa502cc0493a6d857b92","kubernetes.io/config.seen":"2024-01-08T20:38:05.870870627Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0108 20:49:38.412200   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:38.412218   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.412229   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.412241   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.414277   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:38.414297   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.414306   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.414314   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:38.414321   35097 round_trippers.go:580]     Audit-Id: 7fda3034-4c60-4308-944f-b785a97c9145
	I0108 20:49:38.414329   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.414347   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.414354   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.414997   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:38.415408   35097 pod_ready.go:97] node "multinode-340815" hosting pod "kube-apiserver-multinode-340815" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-340815" has status "Ready":"False"
	I0108 20:49:38.415433   35097 pod_ready.go:81] duration metric: took 6.113859ms waiting for pod "kube-apiserver-multinode-340815" in "kube-system" namespace to be "Ready" ...
	E0108 20:49:38.415444   35097 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-340815" hosting pod "kube-apiserver-multinode-340815" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-340815" has status "Ready":"False"
	I0108 20:49:38.415457   35097 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:38.415535   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-340815
	I0108 20:49:38.415547   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.415557   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.415566   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.418096   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:38.418115   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.418124   35097 round_trippers.go:580]     Audit-Id: 34cc500e-c043-4132-90fd-078e12149bb5
	I0108 20:49:38.418130   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.418135   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.418140   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.418145   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.418157   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:38.418308   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-340815","namespace":"kube-system","uid":"3b29ca3f-d23b-4add-a5fb-d59381398862","resourceVersion":"789","creationTimestamp":"2024-01-08T20:38:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1f741652d6560a2396658aaab123d801","kubernetes.io/config.mirror":"1f741652d6560a2396658aaab123d801","kubernetes.io/config.seen":"2024-01-08T20:37:56.785419514Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0108 20:49:38.418697   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:38.418709   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.418719   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.418725   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.421567   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:38.421588   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.421598   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:38.421606   35097 round_trippers.go:580]     Audit-Id: 850746da-a460-45ae-aa54-2b2e19492254
	I0108 20:49:38.421614   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.421623   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.421636   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.421648   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.421829   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:38.422137   35097 pod_ready.go:97] node "multinode-340815" hosting pod "kube-controller-manager-multinode-340815" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-340815" has status "Ready":"False"
	I0108 20:49:38.422154   35097 pod_ready.go:81] duration metric: took 6.685653ms waiting for pod "kube-controller-manager-multinode-340815" in "kube-system" namespace to be "Ready" ...
	E0108 20:49:38.422162   35097 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-340815" hosting pod "kube-controller-manager-multinode-340815" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-340815" has status "Ready":"False"
	I0108 20:49:38.422170   35097 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j5w6d" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:38.617627   35097 request.go:629] Waited for 195.369972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5w6d
	I0108 20:49:38.617712   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5w6d
	I0108 20:49:38.617724   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.617738   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.617751   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.626619   35097 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 20:49:38.626641   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.626648   35097 round_trippers.go:580]     Audit-Id: 0c9adefe-511e-4568-89dc-b463b1bfe2ef
	I0108 20:49:38.626654   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.626659   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.626664   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.626669   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.626674   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:38.627242   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5w6d","generateName":"kube-proxy-","namespace":"kube-system","uid":"61568130-b69e-48ce-86f0-9a9e63ed99ab","resourceVersion":"522","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0108 20:49:38.817138   35097 request.go:629] Waited for 189.351391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:49:38.817217   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:49:38.817224   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:38.817233   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:38.817252   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:38.819746   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:38.819772   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:38.819779   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:38.819784   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:38.819789   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:38.819794   35097 round_trippers.go:580]     Audit-Id: 50cbbbfb-0c19-44bb-9125-4d121c5d11fb
	I0108 20:49:38.819799   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:38.819804   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:38.819974   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"788","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_41_37_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0108 20:49:38.820360   35097 pod_ready.go:92] pod "kube-proxy-j5w6d" in "kube-system" namespace has status "Ready":"True"
	I0108 20:49:38.820384   35097 pod_ready.go:81] duration metric: took 398.20657ms waiting for pod "kube-proxy-j5w6d" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:38.820397   35097 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lxkrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:39.017417   35097 request.go:629] Waited for 196.921933ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lxkrv
	I0108 20:49:39.017520   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lxkrv
	I0108 20:49:39.017534   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:39.017546   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:39.017556   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:39.020361   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:39.020386   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:39.020399   35097 round_trippers.go:580]     Audit-Id: 49d6953c-0344-494e-ad7d-b21bbcfb33e7
	I0108 20:49:39.020407   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:39.020424   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:39.020433   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:39.020445   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:39.020452   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:38 GMT
	I0108 20:49:39.020617   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lxkrv","generateName":"kube-proxy-","namespace":"kube-system","uid":"d7fed398-b2ff-4ec4-a1a6-d0a7b8dca989","resourceVersion":"739","creationTimestamp":"2024-01-08T20:40:52Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:40:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 20:49:39.217534   35097 request.go:629] Waited for 196.418636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m03
	I0108 20:49:39.217612   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m03
	I0108 20:49:39.217619   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:39.217631   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:39.217640   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:39.220125   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:39.220150   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:39.220161   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:39 GMT
	I0108 20:49:39.220169   35097 round_trippers.go:580]     Audit-Id: fd1040ea-b4c4-45d4-a168-199c8305620b
	I0108 20:49:39.220177   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:39.220194   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:39.220205   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:39.220216   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:39.220440   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m03","uid":"f402a58c-763c-4188-b0f9-533674f03d66","resourceVersion":"761","creationTimestamp":"2024-01-08T20:41:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_41_37_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:41:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 4085 chars]
	I0108 20:49:39.220799   35097 pod_ready.go:92] pod "kube-proxy-lxkrv" in "kube-system" namespace has status "Ready":"True"
	I0108 20:49:39.220818   35097 pod_ready.go:81] duration metric: took 400.40772ms waiting for pod "kube-proxy-lxkrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:39.220831   35097 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z9xrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:39.416780   35097 request.go:629] Waited for 195.871227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9xrv
	I0108 20:49:39.416841   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9xrv
	I0108 20:49:39.416848   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:39.416873   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:39.416882   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:39.419690   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:39.419714   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:39.419723   35097 round_trippers.go:580]     Audit-Id: fdbb7889-0d6b-41a0-952f-bf36cb66cb5a
	I0108 20:49:39.419732   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:39.419739   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:39.419761   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:39.419774   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:39.419790   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:39 GMT
	I0108 20:49:39.419980   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z9xrv","generateName":"kube-proxy-","namespace":"kube-system","uid":"a0843325-2adf-4c2f-8489-067554648b52","resourceVersion":"810","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 20:49:39.616800   35097 request.go:629] Waited for 196.372369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:39.616891   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:39.616901   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:39.616909   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:39.616924   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:39.619658   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:39.619683   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:39.619702   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:39.619708   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:39.619720   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:39 GMT
	I0108 20:49:39.619730   35097 round_trippers.go:580]     Audit-Id: 87e12038-be6c-4a42-aade-e7f334917125
	I0108 20:49:39.619738   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:39.619746   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:39.619944   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:39.620378   35097 pod_ready.go:97] node "multinode-340815" hosting pod "kube-proxy-z9xrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-340815" has status "Ready":"False"
	I0108 20:49:39.620404   35097 pod_ready.go:81] duration metric: took 399.557778ms waiting for pod "kube-proxy-z9xrv" in "kube-system" namespace to be "Ready" ...
	E0108 20:49:39.620416   35097 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-340815" hosting pod "kube-proxy-z9xrv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-340815" has status "Ready":"False"
	I0108 20:49:39.620457   35097 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:39.816793   35097 request.go:629] Waited for 196.265978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340815
	I0108 20:49:39.816868   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340815
	I0108 20:49:39.816873   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:39.816884   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:39.816894   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:39.819457   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:39.819484   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:39.819494   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:39.819503   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:39.819512   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:39.819520   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:39 GMT
	I0108 20:49:39.819536   35097 round_trippers.go:580]     Audit-Id: 1efff9c6-f113-4b63-8aab-8bca39d6cd27
	I0108 20:49:39.819544   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:39.819765   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-340815","namespace":"kube-system","uid":"008c4fe8-78b1-4326-8452-215037af26d6","resourceVersion":"790","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0c87b92132627dab75791d3cff759e12","kubernetes.io/config.mirror":"0c87b92132627dab75791d3cff759e12","kubernetes.io/config.seen":"2024-01-08T20:38:05.870865233Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0108 20:49:40.017633   35097 request.go:629] Waited for 197.391371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:40.017686   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:40.017698   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:40.017711   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:40.017726   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:40.020504   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:40.020524   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:40.020531   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:39 GMT
	I0108 20:49:40.020536   35097 round_trippers.go:580]     Audit-Id: 4177be7d-4045-486d-81c1-0317488f5a00
	I0108 20:49:40.020548   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:40.020561   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:40.020570   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:40.020582   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:40.020756   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:40.021075   35097 pod_ready.go:97] node "multinode-340815" hosting pod "kube-scheduler-multinode-340815" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-340815" has status "Ready":"False"
	I0108 20:49:40.021099   35097 pod_ready.go:81] duration metric: took 400.634351ms waiting for pod "kube-scheduler-multinode-340815" in "kube-system" namespace to be "Ready" ...
	E0108 20:49:40.021109   35097 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-340815" hosting pod "kube-scheduler-multinode-340815" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-340815" has status "Ready":"False"
	I0108 20:49:40.021120   35097 pod_ready.go:38] duration metric: took 1.638580509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
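
The block above shows the readiness gate this test relies on: for each control-plane pod minikube GETs the pod, then GETs the node named in the pod's spec, and skips the pod when that node reports Ready=False. As a rough illustration only (this is not minikube's pod_ready.go; the function name and the hard-coded pod name are hypothetical), the same check can be expressed with client-go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podAndNodeReady reports whether the pod's Ready condition is True and
// whether the node it is scheduled on is itself Ready, mirroring the
// "node ... has status Ready:False (skipping!)" pattern in the log above.
func podAndNodeReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			return false, fmt.Errorf("node %q hosting pod %q is not Ready", node.Name, pod.Name)
		}
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podAndNodeReady(context.Background(), cs, "kube-system", "etcd-multinode-340815")
	fmt.Println(ready, err)
}
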
	I0108 20:49:40.021141   35097 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 20:49:40.032389   35097 command_runner.go:130] > -16
	I0108 20:49:40.032427   35097 ops.go:34] apiserver oom_adj: -16
	I0108 20:49:40.032435   35097 kubeadm.go:640] restartCluster took 22.558825316s
	I0108 20:49:40.032443   35097 kubeadm.go:406] StartCluster complete in 22.612649608s
	I0108 20:49:40.032457   35097 settings.go:142] acquiring lock: {Name:mk91d3baf51872e4bb0758b94fca7c7249bb9666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:49:40.032522   35097 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:49:40.033118   35097 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/kubeconfig: {Name:mkeb2e8a20e31c0c2d5c7e8214a27af3141300ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:49:40.033361   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 20:49:40.033499   35097 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 20:49:40.036422   35097 out.go:177] * Enabled addons: 
	I0108 20:49:40.033607   35097 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:49:40.033636   35097 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:49:40.038006   35097 addons.go:508] enable addons completed in 4.547237ms: enabled=[]
	I0108 20:49:40.036791   35097 kapi.go:59] client config for multinode-340815: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:49:40.038275   35097 round_trippers.go:463] GET https://192.168.39.196:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:49:40.038284   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:40.038291   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:40.038297   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:40.041074   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:40.041091   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:40.041098   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:40 GMT
	I0108 20:49:40.041107   35097 round_trippers.go:580]     Audit-Id: 67a0cbdf-5efe-4125-8983-1f114964e340
	I0108 20:49:40.041115   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:40.041125   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:40.041137   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:40.041146   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:40.041159   35097 round_trippers.go:580]     Content-Length: 291
	I0108 20:49:40.041204   35097 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8a90ea09-afeb-4dda-ab10-18a22e37ea78","resourceVersion":"801","creationTimestamp":"2024-01-08T20:38:05Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 20:49:40.041390   35097 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-340815" context rescaled to 1 replicas
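
The GET against .../deployments/coredns/scale and the "rescaled to 1 replicas" message correspond to reading the Deployment's scale subresource and, when the replica count differs, writing it back. A minimal sketch of that pattern, assuming client-go and an illustrative helper name (this is not minikube's kapi.go):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// scaleDeployment sets spec.replicas on a Deployment via the scale
// subresource, the same endpoint queried in the log above.
func scaleDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired count, nothing to change
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := scaleDeployment(context.Background(), cs, "kube-system", "coredns", 1); err != nil {
		panic(err)
	}
	fmt.Println("coredns scaled to 1 replica")
}
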
	I0108 20:49:40.041429   35097 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 20:49:40.043096   35097 out.go:177] * Verifying Kubernetes components...
	I0108 20:49:40.044627   35097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:49:40.132471   35097 command_runner.go:130] > apiVersion: v1
	I0108 20:49:40.132501   35097 command_runner.go:130] > data:
	I0108 20:49:40.132508   35097 command_runner.go:130] >   Corefile: |
	I0108 20:49:40.132514   35097 command_runner.go:130] >     .:53 {
	I0108 20:49:40.132519   35097 command_runner.go:130] >         log
	I0108 20:49:40.132526   35097 command_runner.go:130] >         errors
	I0108 20:49:40.132532   35097 command_runner.go:130] >         health {
	I0108 20:49:40.132545   35097 command_runner.go:130] >            lameduck 5s
	I0108 20:49:40.132551   35097 command_runner.go:130] >         }
	I0108 20:49:40.132558   35097 command_runner.go:130] >         ready
	I0108 20:49:40.132565   35097 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 20:49:40.132571   35097 command_runner.go:130] >            pods insecure
	I0108 20:49:40.132582   35097 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 20:49:40.132588   35097 command_runner.go:130] >            ttl 30
	I0108 20:49:40.132595   35097 command_runner.go:130] >         }
	I0108 20:49:40.132605   35097 command_runner.go:130] >         prometheus :9153
	I0108 20:49:40.132612   35097 command_runner.go:130] >         hosts {
	I0108 20:49:40.132623   35097 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0108 20:49:40.132629   35097 command_runner.go:130] >            fallthrough
	I0108 20:49:40.132638   35097 command_runner.go:130] >         }
	I0108 20:49:40.132647   35097 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 20:49:40.132654   35097 command_runner.go:130] >            max_concurrent 1000
	I0108 20:49:40.132663   35097 command_runner.go:130] >         }
	I0108 20:49:40.132670   35097 command_runner.go:130] >         cache 30
	I0108 20:49:40.132678   35097 command_runner.go:130] >         loop
	I0108 20:49:40.132692   35097 command_runner.go:130] >         reload
	I0108 20:49:40.132701   35097 command_runner.go:130] >         loadbalance
	I0108 20:49:40.132710   35097 command_runner.go:130] >     }
	I0108 20:49:40.132717   35097 command_runner.go:130] > kind: ConfigMap
	I0108 20:49:40.132724   35097 command_runner.go:130] > metadata:
	I0108 20:49:40.132735   35097 command_runner.go:130] >   creationTimestamp: "2024-01-08T20:38:05Z"
	I0108 20:49:40.132749   35097 command_runner.go:130] >   name: coredns
	I0108 20:49:40.132759   35097 command_runner.go:130] >   namespace: kube-system
	I0108 20:49:40.132767   35097 command_runner.go:130] >   resourceVersion: "355"
	I0108 20:49:40.132772   35097 command_runner.go:130] >   uid: d5a0581d-11a8-42c8-8842-c8e10f16d3a9
	I0108 20:49:40.132845   35097 node_ready.go:35] waiting up to 6m0s for node "multinode-340815" to be "Ready" ...
	I0108 20:49:40.132861   35097 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 20:49:40.217198   35097 request.go:629] Waited for 84.258371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:40.217291   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:40.217298   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:40.217310   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:40.217320   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:40.279525   35097 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I0108 20:49:40.279553   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:40.279560   35097 round_trippers.go:580]     Audit-Id: f18aa0f1-ba4d-44c1-8224-3e6c78d8de97
	I0108 20:49:40.279565   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:40.279570   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:40.279575   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:40.279581   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:40.279586   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:40 GMT
	I0108 20:49:40.282007   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:40.633343   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:40.633369   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:40.633381   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:40.633387   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:40.636237   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:40.636261   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:40.636271   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:40.636279   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:40 GMT
	I0108 20:49:40.636288   35097 round_trippers.go:580]     Audit-Id: 1bb2137e-02b7-4392-a45d-fcf878ecdb31
	I0108 20:49:40.636301   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:40.636310   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:40.636317   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:40.636556   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:41.133220   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:41.133247   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:41.133255   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:41.133261   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:41.136054   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:41.136107   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:41.136120   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:41.136129   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:41 GMT
	I0108 20:49:41.136142   35097 round_trippers.go:580]     Audit-Id: 897432a2-747d-4fa3-8cb0-71331bcde93f
	I0108 20:49:41.136153   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:41.136162   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:41.136174   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:41.136466   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:41.634005   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:41.634032   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:41.634040   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:41.634048   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:41.636771   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:41.636795   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:41.636804   35097 round_trippers.go:580]     Audit-Id: 722e4a6e-62eb-4b6e-97fa-1b3b68a5b7f3
	I0108 20:49:41.636811   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:41.636819   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:41.636825   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:41.636833   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:41.636840   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:41 GMT
	I0108 20:49:41.637208   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:42.133346   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:42.133371   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:42.133379   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:42.133385   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:42.136227   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:42.136247   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:42.136258   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:42 GMT
	I0108 20:49:42.136264   35097 round_trippers.go:580]     Audit-Id: eca60b67-0e45-4348-9b90-da2f234529b0
	I0108 20:49:42.136269   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:42.136274   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:42.136279   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:42.136284   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:42.136558   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:42.136852   35097 node_ready.go:58] node "multinode-340815" has status "Ready":"False"
	I0108 20:49:42.633093   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:42.633130   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:42.633138   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:42.633144   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:42.635811   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:42.635835   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:42.635845   35097 round_trippers.go:580]     Audit-Id: d30251cd-69c6-4dea-8343-a3ea433ed223
	I0108 20:49:42.635854   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:42.635861   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:42.635869   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:42.635880   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:42.635888   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:42 GMT
	I0108 20:49:42.636280   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:43.134021   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:43.134045   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:43.134053   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:43.134059   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:43.137076   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:43.137099   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:43.137106   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:43.137121   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:43.137126   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:43 GMT
	I0108 20:49:43.137132   35097 round_trippers.go:580]     Audit-Id: bbe45ee8-fc6b-4a92-bce9-13030311f8d3
	I0108 20:49:43.137137   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:43.137142   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:43.137626   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:43.633271   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:43.633296   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:43.633310   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:43.633316   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:43.636062   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:43.636083   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:43.636105   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:43.636114   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:43.636125   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:43.636139   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:43.636148   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:43 GMT
	I0108 20:49:43.636159   35097 round_trippers.go:580]     Audit-Id: 6637582a-65f5-4f51-ad77-e16e228c5806
	I0108 20:49:43.636475   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:44.133080   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:44.133118   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:44.133138   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:44.133144   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:44.135989   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:44.136010   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:44.136020   35097 round_trippers.go:580]     Audit-Id: 906c01c1-e41b-4029-8c0a-3a2e5a19b91b
	I0108 20:49:44.136032   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:44.136041   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:44.136054   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:44.136062   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:44.136074   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:44 GMT
	I0108 20:49:44.136235   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:44.633838   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:44.633863   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:44.633871   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:44.633877   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:44.638167   35097 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:49:44.638200   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:44.638223   35097 round_trippers.go:580]     Audit-Id: 14aa1b3d-b06f-47ab-95f3-02623597c71b
	I0108 20:49:44.638232   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:44.638241   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:44.638250   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:44.638259   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:44.638268   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:44 GMT
	I0108 20:49:44.638449   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:44.638865   35097 node_ready.go:58] node "multinode-340815" has status "Ready":"False"
	I0108 20:49:45.133599   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:45.133632   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:45.133648   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:45.133657   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:45.136544   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:45.136567   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:45.136575   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:45.136583   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:45 GMT
	I0108 20:49:45.136591   35097 round_trippers.go:580]     Audit-Id: 60a5bee3-1608-4747-b634-8e572da59289
	I0108 20:49:45.136598   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:45.136605   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:45.136612   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:45.136881   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"769","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0108 20:49:45.633500   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:45.633523   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:45.633532   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:45.633538   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:45.635997   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:45.636016   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:45.636028   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:45.636036   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:45 GMT
	I0108 20:49:45.636044   35097 round_trippers.go:580]     Audit-Id: 099c0bf0-cdb4-428c-9c8a-d2da1a38c82f
	I0108 20:49:45.636050   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:45.636058   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:45.636066   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:45.636314   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:45.636746   35097 node_ready.go:49] node "multinode-340815" has status "Ready":"True"
	I0108 20:49:45.636772   35097 node_ready.go:38] duration metric: took 5.503903522s waiting for node "multinode-340815" to be "Ready" ...
	I0108 20:49:45.636787   35097 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:49:45.636865   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:49:45.636876   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:45.636886   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:45.636898   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:45.641266   35097 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:49:45.641282   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:45.641288   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:45.641293   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:45.641307   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:45.641313   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:45.641318   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:45 GMT
	I0108 20:49:45.641324   35097 round_trippers.go:580]     Audit-Id: ec498a0f-19e3-446e-9b7b-2c2c5a30aa87
	I0108 20:49:45.643287   35097 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"895"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82713 chars]
	I0108 20:49:45.645920   35097 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:45.645995   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:45.646002   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:45.646009   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:45.646017   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:45.648626   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:45.648641   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:45.648647   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:45.648652   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:45.648657   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:45 GMT
	I0108 20:49:45.648662   35097 round_trippers.go:580]     Audit-Id: 393ed9d3-2c34-409d-85a7-4a60ab226fd0
	I0108 20:49:45.648667   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:45.648673   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:45.648850   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:45.649278   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:45.649292   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:45.649299   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:45.649304   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:45.651069   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:49:45.651083   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:45.651088   35097 round_trippers.go:580]     Audit-Id: 9c3bebaa-3dbb-4d8f-87aa-ca6f1ace558c
	I0108 20:49:45.651094   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:45.651099   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:45.651103   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:45.651108   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:45.651116   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:45 GMT
	I0108 20:49:45.651264   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:46.146948   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:46.146974   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:46.146982   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:46.146988   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:46.150991   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:46.151012   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:46.151020   35097 round_trippers.go:580]     Audit-Id: db37d34f-41b4-43a4-a79e-0fb5b226ca0b
	I0108 20:49:46.151028   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:46.151037   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:46.151046   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:46.151054   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:46.151063   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:46 GMT
	I0108 20:49:46.151227   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:46.151663   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:46.151676   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:46.151683   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:46.151688   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:46.154407   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:46.154433   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:46.154443   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:46.154451   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:46 GMT
	I0108 20:49:46.154459   35097 round_trippers.go:580]     Audit-Id: 907453fc-c8cd-4531-a11e-d780a949873f
	I0108 20:49:46.154468   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:46.154480   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:46.154488   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:46.154640   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:46.647123   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:46.647163   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:46.647175   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:46.647184   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:46.650668   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:46.650691   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:46.650699   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:46 GMT
	I0108 20:49:46.650704   35097 round_trippers.go:580]     Audit-Id: b86966a9-1e0b-4746-b66a-f5d4fac548db
	I0108 20:49:46.650709   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:46.650714   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:46.650726   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:46.650734   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:46.651355   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:46.651931   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:46.651947   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:46.651954   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:46.651960   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:46.655052   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:46.655071   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:46.655078   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:46.655083   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:46.655088   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:46.655093   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:46 GMT
	I0108 20:49:46.655098   35097 round_trippers.go:580]     Audit-Id: 1ec72b73-c68c-4a33-bc7b-fa13fb848993
	I0108 20:49:46.655103   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:46.655331   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:47.146716   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:47.146738   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:47.146747   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:47.146753   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:47.150218   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:47.150245   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:47.150260   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:47.150270   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:47.150278   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:47.150288   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:47 GMT
	I0108 20:49:47.150301   35097 round_trippers.go:580]     Audit-Id: 339cf35e-fc1a-4386-9e03-040c04cb7b13
	I0108 20:49:47.150313   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:47.150501   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:47.151003   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:47.151019   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:47.151027   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:47.151035   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:47.153526   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:47.153543   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:47.153554   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:47.153561   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:47.153569   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:47.153582   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:47 GMT
	I0108 20:49:47.153604   35097 round_trippers.go:580]     Audit-Id: 0a2fb09e-f3b6-477d-9c9f-2d2a097b6e1c
	I0108 20:49:47.153613   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:47.153894   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:47.646816   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:47.646838   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:47.646846   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:47.646852   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:47.652825   35097 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 20:49:47.652853   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:47.652863   35097 round_trippers.go:580]     Audit-Id: 1631b3ae-1e9a-4b83-9061-135d1c5af07c
	I0108 20:49:47.652871   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:47.652880   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:47.652889   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:47.652907   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:47.652916   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:47 GMT
	I0108 20:49:47.653416   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:47.653846   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:47.653859   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:47.653866   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:47.653872   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:47.658104   35097 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:49:47.658128   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:47.658138   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:47.658150   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:47 GMT
	I0108 20:49:47.658158   35097 round_trippers.go:580]     Audit-Id: 5dcbfe54-dd07-4cc5-a56b-68a7d9188814
	I0108 20:49:47.658165   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:47.658173   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:47.658182   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:47.658297   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:47.658581   35097 pod_ready.go:102] pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace has status "Ready":"False"
	I0108 20:49:48.146930   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:48.146954   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:48.146962   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:48.146968   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:48.149744   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:48.149762   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:48.149768   35097 round_trippers.go:580]     Audit-Id: 832edf86-4d38-4099-9207-0332fc8f193b
	I0108 20:49:48.149774   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:48.149779   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:48.149786   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:48.149795   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:48.149809   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:48 GMT
	I0108 20:49:48.149980   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:48.150425   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:48.150441   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:48.150448   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:48.150454   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:48.154139   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:48.154160   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:48.154167   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:48 GMT
	I0108 20:49:48.154172   35097 round_trippers.go:580]     Audit-Id: 81dda3d1-6a50-4541-925d-f39612ad94de
	I0108 20:49:48.154177   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:48.154182   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:48.154187   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:48.154192   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:48.154315   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:48.647107   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:48.647149   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:48.647157   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:48.647163   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:48.650074   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:48.650101   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:48.650117   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:48.650125   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:48.650132   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:48 GMT
	I0108 20:49:48.650138   35097 round_trippers.go:580]     Audit-Id: 6bf583b0-4e40-4880-83db-8857f5857eec
	I0108 20:49:48.650144   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:48.650150   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:48.650885   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:48.651415   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:48.651431   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:48.651443   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:48.651453   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:48.653980   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:48.653996   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:48.654002   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:48 GMT
	I0108 20:49:48.654007   35097 round_trippers.go:580]     Audit-Id: d68c9c41-41ee-4902-a6de-835287631866
	I0108 20:49:48.654012   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:48.654017   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:48.654022   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:48.654027   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:48.654240   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:49.147001   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:49.147026   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:49.147036   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:49.147042   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:49.149819   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:49.149849   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:49.149857   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:49 GMT
	I0108 20:49:49.149862   35097 round_trippers.go:580]     Audit-Id: 5b7b79fb-b127-444c-ace4-45931b7cfc57
	I0108 20:49:49.149867   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:49.149881   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:49.149886   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:49.149891   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:49.150733   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:49.151162   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:49.151175   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:49.151182   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:49.151188   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:49.153663   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:49.153679   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:49.153685   35097 round_trippers.go:580]     Audit-Id: ff988c86-289c-4f45-99d8-719608eaf313
	I0108 20:49:49.153691   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:49.153696   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:49.153701   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:49.153708   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:49.153716   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:49 GMT
	I0108 20:49:49.154128   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:49.646873   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:49.646909   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:49.646917   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:49.646923   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:49.649617   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:49.649658   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:49.649669   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:49.649678   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:49.649686   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:49.649694   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:49 GMT
	I0108 20:49:49.649702   35097 round_trippers.go:580]     Audit-Id: 762a26bb-7063-4a99-b29b-73d8610288f7
	I0108 20:49:49.649715   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:49.650180   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:49.650615   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:49.650627   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:49.650634   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:49.650640   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:49.652943   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:49.652968   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:49.652975   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:49.652980   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:49.652987   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:49.652999   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:49 GMT
	I0108 20:49:49.653012   35097 round_trippers.go:580]     Audit-Id: d23540ff-38aa-4abc-8999-b21263c7836e
	I0108 20:49:49.653021   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:49.653144   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:50.146170   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:50.146196   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:50.146206   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:50.146215   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:50.149014   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:50.149036   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:50.149046   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:50.149053   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:50.149061   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:50 GMT
	I0108 20:49:50.149069   35097 round_trippers.go:580]     Audit-Id: b2e097af-fb7d-4770-8ba1-559ae941568a
	I0108 20:49:50.149084   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:50.149090   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:50.149481   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:50.150047   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:50.150066   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:50.150073   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:50.150080   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:50.152490   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:50.152511   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:50.152520   35097 round_trippers.go:580]     Audit-Id: e4a0dc5e-81dd-41b2-8bc1-5bc65ca1b898
	I0108 20:49:50.152528   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:50.152540   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:50.152548   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:50.152555   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:50.152564   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:50 GMT
	I0108 20:49:50.152738   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:50.153017   35097 pod_ready.go:102] pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace has status "Ready":"False"
	I0108 20:49:50.646358   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:50.646389   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:50.646400   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:50.646408   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:50.649229   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:50.649256   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:50.649267   35097 round_trippers.go:580]     Audit-Id: c1613cd2-0377-49c7-a44d-2a82b33802c5
	I0108 20:49:50.649276   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:50.649285   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:50.649294   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:50.649303   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:50.649309   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:50 GMT
	I0108 20:49:50.649456   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:50.650037   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:50.650057   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:50.650068   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:50.650082   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:50.652573   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:50.652596   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:50.652607   35097 round_trippers.go:580]     Audit-Id: 205ee4e4-933b-4ca9-9599-9906998a10e8
	I0108 20:49:50.652615   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:50.652622   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:50.652629   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:50.652638   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:50.652650   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:50 GMT
	I0108 20:49:50.652812   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:51.146422   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:51.146449   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:51.146457   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:51.146487   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:51.149355   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:51.149377   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:51.149384   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:51.149389   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:51.149402   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:51 GMT
	I0108 20:49:51.149408   35097 round_trippers.go:580]     Audit-Id: 83d1dad6-eb7b-4651-8494-4778759e96ea
	I0108 20:49:51.149416   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:51.149424   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:51.149593   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:51.150143   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:51.150160   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:51.150174   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:51.150182   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:51.153138   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:51.153159   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:51.153169   35097 round_trippers.go:580]     Audit-Id: ab4ce5c9-490b-4f70-8565-3e88de06ec0e
	I0108 20:49:51.153176   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:51.153183   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:51.153190   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:51.153198   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:51.153206   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:51 GMT
	I0108 20:49:51.153445   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:51.646908   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:51.646933   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:51.646941   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:51.646947   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:51.650413   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:51.650436   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:51.650446   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:51.650458   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:51.650465   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:51 GMT
	I0108 20:49:51.650472   35097 round_trippers.go:580]     Audit-Id: 2ed3e260-44f9-4b49-98f0-9391be148fd1
	I0108 20:49:51.650479   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:51.650487   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:51.650820   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:51.651253   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:51.651268   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:51.651278   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:51.651287   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:51.653807   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:51.653825   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:51.653832   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:51.653841   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:51.653849   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:51.653857   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:51.653874   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:51 GMT
	I0108 20:49:51.653885   35097 round_trippers.go:580]     Audit-Id: 37b0d2e3-c6e8-481a-b8ef-190524c621f2
	I0108 20:49:51.654011   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:52.147150   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:52.147176   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:52.147184   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:52.147190   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:52.150400   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:52.150425   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:52.150440   35097 round_trippers.go:580]     Audit-Id: 7b2ad219-c835-4e54-9efe-3c1fdbee4a6a
	I0108 20:49:52.150453   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:52.150466   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:52.150473   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:52.150482   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:52.150493   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:52 GMT
	I0108 20:49:52.150653   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:52.151151   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:52.151167   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:52.151174   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:52.151184   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:52.153430   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:52.153450   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:52.153459   35097 round_trippers.go:580]     Audit-Id: e74e340d-0607-4183-a8b0-eb1f20efa245
	I0108 20:49:52.153467   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:52.153475   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:52.153496   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:52.153511   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:52.153523   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:52 GMT
	I0108 20:49:52.153685   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:52.153995   35097 pod_ready.go:102] pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace has status "Ready":"False"
	I0108 20:49:52.646327   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:52.646359   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:52.646371   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:52.646381   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:52.649079   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:52.649098   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:52.649106   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:52.649113   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:52 GMT
	I0108 20:49:52.649121   35097 round_trippers.go:580]     Audit-Id: 450cd7d6-c6a9-4838-82ac-e9846dc2d2a2
	I0108 20:49:52.649135   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:52.649144   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:52.649156   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:52.649319   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:52.649797   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:52.649812   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:52.649821   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:52.649830   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:52.651831   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:49:52.651844   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:52.651850   35097 round_trippers.go:580]     Audit-Id: dc9efedc-e105-4ac7-8413-28142cc1d3b1
	I0108 20:49:52.651855   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:52.651862   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:52.651870   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:52.651878   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:52.651887   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:52 GMT
	I0108 20:49:52.652095   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:53.146856   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:53.146881   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:53.146890   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:53.146896   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:53.150068   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:53.150091   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:53.150101   35097 round_trippers.go:580]     Audit-Id: bead0bf2-f3bb-415b-86f0-084c4f0bbeb7
	I0108 20:49:53.150109   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:53.150118   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:53.150128   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:53.150136   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:53.150144   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:53 GMT
	I0108 20:49:53.150354   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"796","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0108 20:49:53.150795   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:53.150809   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:53.150820   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:53.150826   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:53.153164   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:53.153182   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:53.153188   35097 round_trippers.go:580]     Audit-Id: 93aadbc0-227d-4135-9efa-d4ad175ab4c6
	I0108 20:49:53.153193   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:53.153198   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:53.153206   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:53.153214   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:53.153222   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:53 GMT
	I0108 20:49:53.153390   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:53.647099   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:49:53.647124   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:53.647132   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:53.647138   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:53.650765   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:53.650794   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:53.650805   35097 round_trippers.go:580]     Audit-Id: b6b12c06-824f-46a6-afca-c953a34d6632
	I0108 20:49:53.650816   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:53.650824   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:53.650832   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:53.650841   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:53.650849   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:53 GMT
	I0108 20:49:53.651034   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"924","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0108 20:49:53.651644   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:53.651666   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:53.651677   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:53.651686   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:53.653944   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:53.653968   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:53.653977   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:53 GMT
	I0108 20:49:53.653982   35097 round_trippers.go:580]     Audit-Id: 29d93278-132a-452d-a0f3-b0b27cbf6114
	I0108 20:49:53.653988   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:53.653993   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:53.654001   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:53.654006   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:53.654131   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:53.654541   35097 pod_ready.go:92] pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace has status "Ready":"True"
	I0108 20:49:53.654563   35097 pod_ready.go:81] duration metric: took 8.008622914s waiting for pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:53.654573   35097 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:53.654642   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-340815
	I0108 20:49:53.654653   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:53.654660   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:53.654669   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:53.656847   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:53.656868   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:53.656877   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:53.656885   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:53.656893   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:53.656901   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:53 GMT
	I0108 20:49:53.656910   35097 round_trippers.go:580]     Audit-Id: ed21d7f6-5a5b-45c9-8166-3b18efbdf2e6
	I0108 20:49:53.656922   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:53.657076   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-340815","namespace":"kube-system","uid":"c6d1e2c4-6dbc-4495-ac68-c4b030195c2c","resourceVersion":"916","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.196:2379","kubernetes.io/config.hash":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.mirror":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.seen":"2024-01-08T20:38:05.870869333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0108 20:49:53.657551   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:53.657569   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:53.657579   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:53.657588   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:53.659396   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:49:53.659410   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:53.659416   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:53 GMT
	I0108 20:49:53.659422   35097 round_trippers.go:580]     Audit-Id: 52f7ff13-7de1-4907-9029-1c32d5ccdbc9
	I0108 20:49:53.659428   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:53.659437   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:53.659445   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:53.659452   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:53.659733   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:53.660040   35097 pod_ready.go:92] pod "etcd-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:49:53.660055   35097 pod_ready.go:81] duration metric: took 5.474779ms waiting for pod "etcd-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:53.660072   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:53.660160   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-340815
	I0108 20:49:53.660176   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:53.660187   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:53.660204   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:53.662145   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:49:53.662161   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:53.662170   35097 round_trippers.go:580]     Audit-Id: c1368135-f40b-430e-ae13-cee15bb5d96d
	I0108 20:49:53.662179   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:53.662186   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:53.662194   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:53.662203   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:53.662212   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:53 GMT
	I0108 20:49:53.662859   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-340815","namespace":"kube-system","uid":"523b3dcf-2fae-43b4-a9c6-cd2337ae6d6f","resourceVersion":"914","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.196:8443","kubernetes.io/config.hash":"5a9f4acc9b0ffa502cc0493a6d857b92","kubernetes.io/config.mirror":"5a9f4acc9b0ffa502cc0493a6d857b92","kubernetes.io/config.seen":"2024-01-08T20:38:05.870870627Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0108 20:49:53.663539   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:53.663559   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:53.663570   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:53.663580   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:53.666415   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:53.666433   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:53.666439   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:53.666445   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:53.666451   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:53 GMT
	I0108 20:49:53.666456   35097 round_trippers.go:580]     Audit-Id: ab705246-07aa-48e2-b196-c4abce89aad8
	I0108 20:49:53.666460   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:53.666465   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:53.666702   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:53.666959   35097 pod_ready.go:92] pod "kube-apiserver-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:49:53.666971   35097 pod_ready.go:81] duration metric: took 6.89154ms waiting for pod "kube-apiserver-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:53.666979   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:53.667028   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-340815
	I0108 20:49:53.667038   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:53.667053   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:53.667062   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:53.668941   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:49:53.668960   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:53.668967   35097 round_trippers.go:580]     Audit-Id: 5c40c393-d98e-4152-9388-da3fe93085d4
	I0108 20:49:53.668973   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:53.668980   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:53.669014   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:53.669030   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:53.669038   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:53 GMT
	I0108 20:49:53.669169   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-340815","namespace":"kube-system","uid":"3b29ca3f-d23b-4add-a5fb-d59381398862","resourceVersion":"912","creationTimestamp":"2024-01-08T20:38:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1f741652d6560a2396658aaab123d801","kubernetes.io/config.mirror":"1f741652d6560a2396658aaab123d801","kubernetes.io/config.seen":"2024-01-08T20:37:56.785419514Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0108 20:49:53.669488   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:53.669503   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:53.669512   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:53.669518   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:53.671130   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:49:53.671145   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:53.671151   35097 round_trippers.go:580]     Audit-Id: 686f70be-8034-40c3-95e1-2b1f40dc7804
	I0108 20:49:53.671165   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:53.671174   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:53.671183   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:53.671192   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:53.671209   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:53 GMT
	I0108 20:49:53.671360   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:53.671646   35097 pod_ready.go:92] pod "kube-controller-manager-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:49:53.671661   35097 pod_ready.go:81] duration metric: took 4.677352ms waiting for pod "kube-controller-manager-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:53.671670   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j5w6d" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:53.671730   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5w6d
	I0108 20:49:53.671738   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:53.671744   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:53.671750   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:53.673386   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:49:53.673400   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:53.673406   35097 round_trippers.go:580]     Audit-Id: ae08f8c0-929b-4100-a211-ddffb25e89a7
	I0108 20:49:53.673411   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:53.673416   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:53.673428   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:53.673447   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:53.673455   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:53 GMT
	I0108 20:49:53.673635   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5w6d","generateName":"kube-proxy-","namespace":"kube-system","uid":"61568130-b69e-48ce-86f0-9a9e63ed99ab","resourceVersion":"522","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0108 20:49:53.674015   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:49:53.674027   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:53.674035   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:53.674040   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:53.675682   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:49:53.675696   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:53.675702   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:53.675708   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:53.675712   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:53.675718   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:53.675723   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:53 GMT
	I0108 20:49:53.675745   35097 round_trippers.go:580]     Audit-Id: 40b3d7e8-4ca2-425a-92ec-a95080a8d9b2
	I0108 20:49:53.675860   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99","resourceVersion":"788","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_41_37_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0108 20:49:53.676119   35097 pod_ready.go:92] pod "kube-proxy-j5w6d" in "kube-system" namespace has status "Ready":"True"
	I0108 20:49:53.676134   35097 pod_ready.go:81] duration metric: took 4.457874ms waiting for pod "kube-proxy-j5w6d" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:53.676145   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lxkrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:53.847391   35097 request.go:629] Waited for 171.165089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lxkrv
	I0108 20:49:53.847449   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lxkrv
	I0108 20:49:53.847456   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:53.847463   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:53.847470   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:53.850638   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:53.850663   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:53.850673   35097 round_trippers.go:580]     Audit-Id: ad1291a4-9fbc-407d-b4ba-e89637de9d22
	I0108 20:49:53.850682   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:53.850690   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:53.850698   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:53.850714   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:53.850722   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:53 GMT
	I0108 20:49:53.850878   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lxkrv","generateName":"kube-proxy-","namespace":"kube-system","uid":"d7fed398-b2ff-4ec4-a1a6-d0a7b8dca989","resourceVersion":"739","creationTimestamp":"2024-01-08T20:40:52Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:40:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 20:49:54.047808   35097 request.go:629] Waited for 196.402021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m03
	I0108 20:49:54.047901   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m03
	I0108 20:49:54.047908   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:54.047936   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:54.047954   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:54.050592   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:54.050618   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:54.050628   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:54.050636   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:54.050644   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:54 GMT
	I0108 20:49:54.050652   35097 round_trippers.go:580]     Audit-Id: 56900a63-e31b-4764-a08b-62ff6590789e
	I0108 20:49:54.050662   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:54.050671   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:54.050787   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m03","uid":"f402a58c-763c-4188-b0f9-533674f03d66","resourceVersion":"909","creationTimestamp":"2024-01-08T20:41:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_41_37_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:41:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0108 20:49:54.051098   35097 pod_ready.go:92] pod "kube-proxy-lxkrv" in "kube-system" namespace has status "Ready":"True"
	I0108 20:49:54.051116   35097 pod_ready.go:81] duration metric: took 374.960477ms waiting for pod "kube-proxy-lxkrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:54.051125   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z9xrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:54.248172   35097 request.go:629] Waited for 196.99172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9xrv
	I0108 20:49:54.248258   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9xrv
	I0108 20:49:54.248266   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:54.248276   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:54.248286   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:54.251301   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:54.251329   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:54.251338   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:54.251346   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:54.251353   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:54 GMT
	I0108 20:49:54.251360   35097 round_trippers.go:580]     Audit-Id: fb416c12-872a-474a-bd4c-5a4d47fa19b6
	I0108 20:49:54.251367   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:54.251373   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:54.251609   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z9xrv","generateName":"kube-proxy-","namespace":"kube-system","uid":"a0843325-2adf-4c2f-8489-067554648b52","resourceVersion":"810","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 20:49:54.447319   35097 request.go:629] Waited for 195.294918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:54.447401   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:54.447406   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:54.447414   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:54.447421   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:54.450665   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:54.450685   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:54.450699   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:54.450706   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:54.450714   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:54.450722   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:54.450730   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:54 GMT
	I0108 20:49:54.450747   35097 round_trippers.go:580]     Audit-Id: b9c937cb-e206-4e03-a1d6-7dee2594704b
	I0108 20:49:54.451032   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:54.451368   35097 pod_ready.go:92] pod "kube-proxy-z9xrv" in "kube-system" namespace has status "Ready":"True"
	I0108 20:49:54.451385   35097 pod_ready.go:81] duration metric: took 400.255154ms waiting for pod "kube-proxy-z9xrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:54.451394   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:54.647458   35097 request.go:629] Waited for 195.969969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340815
	I0108 20:49:54.647567   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340815
	I0108 20:49:54.647582   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:54.647595   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:54.647608   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:54.650760   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:54.650782   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:54.650792   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:54.650801   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:54 GMT
	I0108 20:49:54.650807   35097 round_trippers.go:580]     Audit-Id: fbd486cd-b79e-4f58-bd56-9c1cbac7f18c
	I0108 20:49:54.650815   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:54.650824   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:54.650833   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:54.651026   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-340815","namespace":"kube-system","uid":"008c4fe8-78b1-4326-8452-215037af26d6","resourceVersion":"888","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0c87b92132627dab75791d3cff759e12","kubernetes.io/config.mirror":"0c87b92132627dab75791d3cff759e12","kubernetes.io/config.seen":"2024-01-08T20:38:05.870865233Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0108 20:49:54.847785   35097 request.go:629] Waited for 196.30273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:54.847864   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:49:54.847871   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:54.847887   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:54.847898   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:54.850678   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:49:54.850698   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:54.850707   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:54.850714   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:54.850755   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:54.850766   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:54.850776   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:54 GMT
	I0108 20:49:54.850789   35097 round_trippers.go:580]     Audit-Id: cc51688c-7402-4e7b-bcb4-7c2d98b01439
	I0108 20:49:54.851039   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0108 20:49:54.851443   35097 pod_ready.go:92] pod "kube-scheduler-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:49:54.851464   35097 pod_ready.go:81] duration metric: took 400.044054ms waiting for pod "kube-scheduler-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:49:54.851479   35097 pod_ready.go:38] duration metric: took 9.214677295s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
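The waits above are driven by repeated GETs of each pod (and its node) followed by a check of the pod's Ready condition. As a rough illustration only — not minikube's own pod_ready.go, and assuming a kubeconfig path exported in $KUBECONFIG plus a hypothetical helper name podReady — a client-go poll of the same shape could look like this:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady returns a condition that is true once the pod reports Ready.
func podReady(cs kubernetes.Interface, ns, name string) wait.ConditionFunc {
	return func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient errors as "not ready yet"
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Poll every 2s, up to the same 6m budget the log uses per pod.
	if err := wait.PollImmediate(2*time.Second, 6*time.Minute,
		podReady(cs, "kube-system", "kube-scheduler-multinode-340815")); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}

The per-pod durations in the log (a few milliseconds to a few hundred milliseconds each) correspond to exactly this pattern of one pod GET and one node GET per check.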
	I0108 20:49:54.851502   35097 api_server.go:52] waiting for apiserver process to appear ...
	I0108 20:49:54.851558   35097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:49:54.864946   35097 command_runner.go:130] > 1109
	I0108 20:49:54.864986   35097 api_server.go:72] duration metric: took 14.823531059s to wait for apiserver process to appear ...
	I0108 20:49:54.864996   35097 api_server.go:88] waiting for apiserver healthz status ...
	I0108 20:49:54.865015   35097 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0108 20:49:54.869969   35097 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0108 20:49:54.870045   35097 round_trippers.go:463] GET https://192.168.39.196:8443/version
	I0108 20:49:54.870052   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:54.870061   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:54.870075   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:54.871216   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:49:54.871231   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:54.871238   35097 round_trippers.go:580]     Audit-Id: 3d714a93-3e53-4d71-823c-40226fe68af1
	I0108 20:49:54.871243   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:54.871248   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:54.871253   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:54.871258   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:54.871263   35097 round_trippers.go:580]     Content-Length: 264
	I0108 20:49:54.871268   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:54 GMT
	I0108 20:49:54.871406   35097 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 20:49:54.871457   35097 api_server.go:141] control plane version: v1.28.4
	I0108 20:49:54.871476   35097 api_server.go:131] duration metric: took 6.470482ms to wait for apiserver health ...
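The health and version probes above are plain HTTPS GETs against the apiserver. A minimal sketch of the same two requests follows, using the apiserver address taken from the log; it skips TLS verification for brevity, whereas the real client authenticates with the cluster's certificates, and anonymous access to these paths may be denied on other clusters:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	apiserver := "https://192.168.39.196:8443" // address from the log above
	client := &http.Client{Transport: &http.Transport{
		// Illustrative only: the real probe trusts the cluster CA instead.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get(apiserver + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %s: %s\n", path, resp.Status, body)
	}
}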
	I0108 20:49:54.871483   35097 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 20:49:55.047930   35097 request.go:629] Waited for 176.374094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:49:55.047986   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:49:55.047991   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:55.048012   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:55.048019   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:55.052556   35097 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:49:55.052584   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:55.052595   35097 round_trippers.go:580]     Audit-Id: 28cc9e46-a6c4-4313-b915-f4e63d730268
	I0108 20:49:55.052603   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:55.052611   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:55.052619   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:55.052628   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:55.052636   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:55 GMT
	I0108 20:49:55.053994   35097 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"928"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"924","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81878 chars]
	I0108 20:49:55.056389   35097 system_pods.go:59] 12 kube-system pods found
	I0108 20:49:55.056412   35097 system_pods.go:61] "coredns-5dd5756b68-h4v6v" [5c1ccbb8-1747-4b6f-b40c-c54670e49d54] Running
	I0108 20:49:55.056420   35097 system_pods.go:61] "etcd-multinode-340815" [c6d1e2c4-6dbc-4495-ac68-c4b030195c2c] Running
	I0108 20:49:55.056426   35097 system_pods.go:61] "kindnet-h48qs" [65d532d3-b3ca-493d-b287-1b03dbdad538] Running
	I0108 20:49:55.056436   35097 system_pods.go:61] "kindnet-tqjx8" [cb8397d0-fc25-459f-9ed2-aacb628f0e59] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 20:49:55.056451   35097 system_pods.go:61] "kindnet-wfgln" [67bb4772-2e5d-489d-93c5-df2a7254be5d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 20:49:55.056460   35097 system_pods.go:61] "kube-apiserver-multinode-340815" [523b3dcf-2fae-43b4-a9c6-cd2337ae6d6f] Running
	I0108 20:49:55.056465   35097 system_pods.go:61] "kube-controller-manager-multinode-340815" [3b29ca3f-d23b-4add-a5fb-d59381398862] Running
	I0108 20:49:55.056472   35097 system_pods.go:61] "kube-proxy-j5w6d" [61568130-b69e-48ce-86f0-9a9e63ed99ab] Running
	I0108 20:49:55.056476   35097 system_pods.go:61] "kube-proxy-lxkrv" [d7fed398-b2ff-4ec4-a1a6-d0a7b8dca989] Running
	I0108 20:49:55.056481   35097 system_pods.go:61] "kube-proxy-z9xrv" [a0843325-2adf-4c2f-8489-067554648b52] Running
	I0108 20:49:55.056485   35097 system_pods.go:61] "kube-scheduler-multinode-340815" [008c4fe8-78b1-4326-8452-215037af26d6] Running
	I0108 20:49:55.056491   35097 system_pods.go:61] "storage-provisioner" [de357297-4bd9-4c71-ada5-ceace0d38cfb] Running
	I0108 20:49:55.056500   35097 system_pods.go:74] duration metric: took 185.01228ms to wait for pod list to return data ...
	I0108 20:49:55.056509   35097 default_sa.go:34] waiting for default service account to be created ...
	I0108 20:49:55.247929   35097 request.go:629] Waited for 191.339304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0108 20:49:55.248004   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0108 20:49:55.248010   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:55.248020   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:55.248029   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:55.251703   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:55.251726   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:55.251735   35097 round_trippers.go:580]     Audit-Id: efea1fb5-4bf9-4b9f-8586-1652336fbd62
	I0108 20:49:55.251744   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:55.251753   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:55.251761   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:55.251769   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:55.251777   35097 round_trippers.go:580]     Content-Length: 261
	I0108 20:49:55.251786   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:55 GMT
	I0108 20:49:55.251849   35097 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"928"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"760bcece-5b51-45a3-9d4c-77490cf0e377","resourceVersion":"295","creationTimestamp":"2024-01-08T20:38:17Z"}}]}
	I0108 20:49:55.252021   35097 default_sa.go:45] found service account: "default"
	I0108 20:49:55.252038   35097 default_sa.go:55] duration metric: took 195.521011ms for default service account to be created ...
	I0108 20:49:55.252074   35097 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 20:49:55.447552   35097 request.go:629] Waited for 195.386311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:49:55.447627   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:49:55.447635   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:55.447646   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:55.447661   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:55.455812   35097 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 20:49:55.455833   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:55.455840   35097 round_trippers.go:580]     Audit-Id: 41b5dc96-2cc8-4dd6-9014-e338d57367c7
	I0108 20:49:55.455850   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:55.455856   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:55.455863   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:55.455871   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:55.455879   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:55 GMT
	I0108 20:49:55.456907   35097 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"928"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"924","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81878 chars]
	I0108 20:49:55.459333   35097 system_pods.go:86] 12 kube-system pods found
	I0108 20:49:55.459358   35097 system_pods.go:89] "coredns-5dd5756b68-h4v6v" [5c1ccbb8-1747-4b6f-b40c-c54670e49d54] Running
	I0108 20:49:55.459363   35097 system_pods.go:89] "etcd-multinode-340815" [c6d1e2c4-6dbc-4495-ac68-c4b030195c2c] Running
	I0108 20:49:55.459367   35097 system_pods.go:89] "kindnet-h48qs" [65d532d3-b3ca-493d-b287-1b03dbdad538] Running
	I0108 20:49:55.459375   35097 system_pods.go:89] "kindnet-tqjx8" [cb8397d0-fc25-459f-9ed2-aacb628f0e59] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 20:49:55.459381   35097 system_pods.go:89] "kindnet-wfgln" [67bb4772-2e5d-489d-93c5-df2a7254be5d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0108 20:49:55.459387   35097 system_pods.go:89] "kube-apiserver-multinode-340815" [523b3dcf-2fae-43b4-a9c6-cd2337ae6d6f] Running
	I0108 20:49:55.459394   35097 system_pods.go:89] "kube-controller-manager-multinode-340815" [3b29ca3f-d23b-4add-a5fb-d59381398862] Running
	I0108 20:49:55.459401   35097 system_pods.go:89] "kube-proxy-j5w6d" [61568130-b69e-48ce-86f0-9a9e63ed99ab] Running
	I0108 20:49:55.459410   35097 system_pods.go:89] "kube-proxy-lxkrv" [d7fed398-b2ff-4ec4-a1a6-d0a7b8dca989] Running
	I0108 20:49:55.459416   35097 system_pods.go:89] "kube-proxy-z9xrv" [a0843325-2adf-4c2f-8489-067554648b52] Running
	I0108 20:49:55.459437   35097 system_pods.go:89] "kube-scheduler-multinode-340815" [008c4fe8-78b1-4326-8452-215037af26d6] Running
	I0108 20:49:55.459445   35097 system_pods.go:89] "storage-provisioner" [de357297-4bd9-4c71-ada5-ceace0d38cfb] Running
	I0108 20:49:55.459452   35097 system_pods.go:126] duration metric: took 207.371826ms to wait for k8s-apps to be running ...
	I0108 20:49:55.459457   35097 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:49:55.459505   35097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:49:55.492413   35097 system_svc.go:56] duration metric: took 32.943835ms WaitForService to wait for kubelet.
	I0108 20:49:55.492445   35097 kubeadm.go:581] duration metric: took 15.450992016s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:49:55.492463   35097 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:49:55.647881   35097 request.go:629] Waited for 155.349106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes
	I0108 20:49:55.647955   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0108 20:49:55.647965   35097 round_trippers.go:469] Request Headers:
	I0108 20:49:55.647976   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:49:55.647990   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:49:55.651396   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:49:55.651422   35097 round_trippers.go:577] Response Headers:
	I0108 20:49:55.651430   35097 round_trippers.go:580]     Audit-Id: 02d05f6f-10dc-4e1b-97a7-6d561f7bf852
	I0108 20:49:55.651435   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:49:55.651440   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:49:55.651445   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:49:55.651450   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:49:55.651456   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:49:55 GMT
	I0108 20:49:55.651662   35097 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"929"},"items":[{"metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"894","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16179 chars]
	I0108 20:49:55.652441   35097 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:49:55.652464   35097 node_conditions.go:123] node cpu capacity is 2
	I0108 20:49:55.652473   35097 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:49:55.652477   35097 node_conditions.go:123] node cpu capacity is 2
	I0108 20:49:55.652481   35097 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:49:55.652484   35097 node_conditions.go:123] node cpu capacity is 2
	I0108 20:49:55.652488   35097 node_conditions.go:105] duration metric: took 160.021992ms to run NodePressure ...
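The ephemeral-storage and cpu figures reported by node_conditions.go above come from each node's status.capacity. A small sketch of reading those same fields with client-go, again assuming a kubeconfig in $KUBECONFIG rather than minikube's internal wiring:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Assign to locals so the pointer-receiver String() is callable.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}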
	I0108 20:49:55.652498   35097 start.go:228] waiting for startup goroutines ...
	I0108 20:49:55.652505   35097 start.go:233] waiting for cluster config update ...
	I0108 20:49:55.652513   35097 start.go:242] writing updated cluster config ...
	I0108 20:49:55.652957   35097 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:49:55.653038   35097 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/config.json ...
	I0108 20:49:55.655922   35097 out.go:177] * Starting worker node multinode-340815-m02 in cluster multinode-340815
	I0108 20:49:55.657493   35097 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:49:55.657516   35097 cache.go:56] Caching tarball of preloaded images
	I0108 20:49:55.657612   35097 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 20:49:55.657623   35097 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 20:49:55.657705   35097 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/config.json ...
	I0108 20:49:55.658450   35097 start.go:365] acquiring machines lock for multinode-340815-m02: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 20:49:55.658525   35097 start.go:369] acquired machines lock for "multinode-340815-m02" in 31.04µs
	I0108 20:49:55.658539   35097 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:49:55.658550   35097 fix.go:54] fixHost starting: m02
	I0108 20:49:55.658856   35097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:49:55.658880   35097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:49:55.673609   35097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I0108 20:49:55.674091   35097 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:49:55.674510   35097 main.go:141] libmachine: Using API Version  1
	I0108 20:49:55.674529   35097 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:49:55.674860   35097 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:49:55.675000   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:49:55.675160   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetState
	I0108 20:49:55.676839   35097 fix.go:102] recreateIfNeeded on multinode-340815-m02: state=Running err=<nil>
	W0108 20:49:55.676856   35097 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 20:49:55.679021   35097 out.go:177] * Updating the running kvm2 "multinode-340815-m02" VM ...
	I0108 20:49:55.680275   35097 machine.go:88] provisioning docker machine ...
	I0108 20:49:55.680308   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:49:55.680600   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetMachineName
	I0108 20:49:55.680798   35097 buildroot.go:166] provisioning hostname "multinode-340815-m02"
	I0108 20:49:55.680821   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetMachineName
	I0108 20:49:55.680990   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:49:55.683756   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:49:55.684219   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:49:55.684289   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:49:55.684454   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:49:55.684649   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:49:55.684805   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:49:55.684948   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:49:55.685168   35097 main.go:141] libmachine: Using SSH client type: native
	I0108 20:49:55.685613   35097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0108 20:49:55.685632   35097 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-340815-m02 && echo "multinode-340815-m02" | sudo tee /etc/hostname
	I0108 20:49:55.838546   35097 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-340815-m02
	
	I0108 20:49:55.838579   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:49:55.841506   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:49:55.841883   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:49:55.841914   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:49:55.842107   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:49:55.842274   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:49:55.842445   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:49:55.842579   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:49:55.842732   35097 main.go:141] libmachine: Using SSH client type: native
	I0108 20:49:55.843177   35097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0108 20:49:55.843206   35097 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-340815-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-340815-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-340815-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:49:55.969200   35097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:49:55.969230   35097 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17907-10702/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-10702/.minikube}
	I0108 20:49:55.969244   35097 buildroot.go:174] setting up certificates
	I0108 20:49:55.969253   35097 provision.go:83] configureAuth start
	I0108 20:49:55.969261   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetMachineName
	I0108 20:49:55.969503   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetIP
	I0108 20:49:55.972129   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:49:55.972614   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:49:55.972638   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:49:55.972787   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:49:55.975063   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:49:55.975398   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:49:55.975432   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:49:55.975530   35097 provision.go:138] copyHostCerts
	I0108 20:49:55.975554   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 20:49:55.975583   35097 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem, removing ...
	I0108 20:49:55.975591   35097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 20:49:55.975652   35097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem (1082 bytes)
	I0108 20:49:55.975720   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 20:49:55.975738   35097 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem, removing ...
	I0108 20:49:55.975744   35097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 20:49:55.975767   35097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem (1123 bytes)
	I0108 20:49:55.975808   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 20:49:55.975824   35097 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem, removing ...
	I0108 20:49:55.975831   35097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 20:49:55.975849   35097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem (1675 bytes)
	I0108 20:49:55.975894   35097 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem org=jenkins.multinode-340815-m02 san=[192.168.39.78 192.168.39.78 localhost 127.0.0.1 minikube multinode-340815-m02]
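The server certificate generated here carries the listed IPs and hostnames as subject alternative names. A self-signed sketch with the same SANs, using only the Go standard library (minikube instead signs with the CA key referenced above, so treat this as an illustration of the SAN handling only):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-340815-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision log line above.
		DNSNames:    []string{"localhost", "minikube", "multinode-340815-m02"},
		IPAddresses: []net.IP{net.ParseIP("192.168.39.78"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}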
	I0108 20:49:56.069389   35097 provision.go:172] copyRemoteCerts
	I0108 20:49:56.069441   35097 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:49:56.069463   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:49:56.072109   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:49:56.072449   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:49:56.072478   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:49:56.072654   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:49:56.072809   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:49:56.072964   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:49:56.073066   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/id_rsa Username:docker}
	I0108 20:49:56.167346   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 20:49:56.167432   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:49:56.190791   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 20:49:56.190865   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 20:49:56.215041   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 20:49:56.215127   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:49:56.239128   35097 provision.go:86] duration metric: configureAuth took 269.856516ms
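For reference, the server certificate produced by the provision step above carries the node IP and hostnames from the san=[...] list as subject alternative names. provision.go generates it in-process with Go's crypto packages; the openssl commands below are only an illustrative hand-rolled equivalent (paths shortened, org and SAN values copied from the log line above):

    # illustrative sketch only; minikube does this in Go, not via openssl
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.multinode-340815-m02" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:192.168.39.78,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-340815-m02")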
	I0108 20:49:56.239163   35097 buildroot.go:189] setting minikube options for container-runtime
	I0108 20:49:56.239429   35097 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:49:56.239511   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:49:56.242092   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:49:56.242378   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:49:56.242409   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:49:56.242520   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:49:56.242674   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:49:56.242847   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:49:56.242960   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:49:56.243090   35097 main.go:141] libmachine: Using SSH client type: native
	I0108 20:49:56.243385   35097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0108 20:49:56.243399   35097 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:51:26.904190   35097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:51:26.904220   35097 machine.go:91] provisioned docker machine in 1m31.223924446s
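The "%!s(MISSING)" fragments in the SSH command logged at 20:49:56.243 are an artifact of how that log line itself was formatted, not part of what ran on the guest; reconstructed from the output echoed back above, the command was effectively:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

The command was issued at 20:49:56 and only returned at 20:51:26, so essentially all of the 1m31s "provisioned docker machine" duration above is spent in the trailing "systemctl restart crio".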
	I0108 20:51:26.904234   35097 start.go:300] post-start starting for "multinode-340815-m02" (driver="kvm2")
	I0108 20:51:26.904247   35097 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:51:26.904264   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:51:26.904607   35097 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:51:26.904633   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:51:26.907588   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:51:26.908038   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:51:26.908072   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:51:26.908235   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:51:26.908444   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:51:26.908597   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:51:26.908708   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/id_rsa Username:docker}
	I0108 20:51:27.001988   35097 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:51:27.006674   35097 command_runner.go:130] > NAME=Buildroot
	I0108 20:51:27.006692   35097 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 20:51:27.006696   35097 command_runner.go:130] > ID=buildroot
	I0108 20:51:27.006702   35097 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 20:51:27.006706   35097 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 20:51:27.006732   35097 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 20:51:27.006752   35097 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/addons for local assets ...
	I0108 20:51:27.006831   35097 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/files for local assets ...
	I0108 20:51:27.006929   35097 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> 178962.pem in /etc/ssl/certs
	I0108 20:51:27.006940   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> /etc/ssl/certs/178962.pem
	I0108 20:51:27.007038   35097 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:51:27.015406   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /etc/ssl/certs/178962.pem (1708 bytes)
	I0108 20:51:27.040366   35097 start.go:303] post-start completed in 136.116083ms
	I0108 20:51:27.040387   35097 fix.go:56] fixHost completed within 1m31.381837119s
	I0108 20:51:27.040406   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:51:27.042845   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:51:27.043332   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:51:27.043361   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:51:27.043542   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:51:27.043770   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:51:27.043906   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:51:27.044040   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:51:27.044211   35097 main.go:141] libmachine: Using SSH client type: native
	I0108 20:51:27.044536   35097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0108 20:51:27.044551   35097 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 20:51:27.172928   35097 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704747087.166540240
	
	I0108 20:51:27.172950   35097 fix.go:206] guest clock: 1704747087.166540240
	I0108 20:51:27.172960   35097 fix.go:219] Guest: 2024-01-08 20:51:27.16654024 +0000 UTC Remote: 2024-01-08 20:51:27.040390259 +0000 UTC m=+455.577867793 (delta=126.149981ms)
	I0108 20:51:27.172976   35097 fix.go:190] guest clock delta is within tolerance: 126.149981ms
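The "date +%!s(MISSING).%!N(MISSING)" above is the same logging artifact; judging from the seconds.nanoseconds value it returned, the guest clock was read with:

    date +%s.%N
    # -> 1704747087.166540240, compared against the host time to derive the 126ms delta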
	I0108 20:51:27.172981   35097 start.go:83] releasing machines lock for "multinode-340815-m02", held for 1m31.514448482s
	I0108 20:51:27.173001   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:51:27.173256   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetIP
	I0108 20:51:27.176025   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:51:27.176462   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:51:27.176492   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:51:27.178751   35097 out.go:177] * Found network options:
	I0108 20:51:27.180153   35097 out.go:177]   - NO_PROXY=192.168.39.196
	W0108 20:51:27.181324   35097 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 20:51:27.181351   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:51:27.181912   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:51:27.182074   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:51:27.182151   35097 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:51:27.182192   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	W0108 20:51:27.182280   35097 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 20:51:27.182356   35097 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:51:27.182381   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:51:27.185044   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:51:27.185452   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:51:27.185510   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:51:27.185537   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:51:27.185722   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:51:27.185898   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:51:27.185989   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:51:27.186023   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:51:27.186074   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:51:27.186257   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:51:27.186276   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/id_rsa Username:docker}
	I0108 20:51:27.186406   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:51:27.186548   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:51:27.186686   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/id_rsa Username:docker}
	I0108 20:51:27.426745   35097 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 20:51:27.426863   35097 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:51:27.433510   35097 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 20:51:27.433675   35097 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 20:51:27.433743   35097 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:51:27.442876   35097 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 20:51:27.442900   35097 start.go:475] detecting cgroup driver to use...
	I0108 20:51:27.442971   35097 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:51:27.457144   35097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:51:27.469673   35097 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:51:27.469725   35097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:51:27.483659   35097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:51:27.498016   35097 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:51:27.642204   35097 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:51:27.780668   35097 docker.go:233] disabling docker service ...
	I0108 20:51:27.780726   35097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:51:27.798732   35097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:51:27.813010   35097 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:51:27.942173   35097 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:51:28.070811   35097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
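With cri-docker and docker stopped and masked above, a quick sanity check that CRI-O is the only runtime left active on the node would look roughly like this (a sketch, not something the test runs):

    systemctl is-enabled docker.service cri-docker.service 2>/dev/null   # expected: masked
    systemctl is-active crio                                             # expected: active
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version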
	I0108 20:51:28.083702   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:51:28.102069   35097 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 20:51:28.102111   35097 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 20:51:28.102163   35097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:51:28.112852   35097 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:51:28.112929   35097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:51:28.123616   35097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:51:28.133987   35097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
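Taken together, the crictl.yaml write and the three sed edits above leave the CRI-O drop-in and crictl config in roughly this state (a sketch; the cgroup_manager and conmon_cgroup values are confirmed by the "crio config" dump further down):

    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/crio/crio.sock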
	I0108 20:51:28.145580   35097 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:51:28.156621   35097 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:51:28.165609   35097 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 20:51:28.165722   35097 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:51:28.174651   35097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:51:28.300848   35097 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 20:51:36.990992   35097 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.690101751s)
	I0108 20:51:36.991020   35097 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:51:36.991065   35097 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:51:36.996075   35097 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 20:51:36.996111   35097 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 20:51:36.996121   35097 command_runner.go:130] > Device: 16h/22d	Inode: 1268        Links: 1
	I0108 20:51:36.996131   35097 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:51:36.996138   35097 command_runner.go:130] > Access: 2024-01-08 20:51:36.914271017 +0000
	I0108 20:51:36.996146   35097 command_runner.go:130] > Modify: 2024-01-08 20:51:36.914271017 +0000
	I0108 20:51:36.996159   35097 command_runner.go:130] > Change: 2024-01-08 20:51:36.914271017 +0000
	I0108 20:51:36.996166   35097 command_runner.go:130] >  Birth: -
	I0108 20:51:36.996215   35097 start.go:543] Will wait 60s for crictl version
	I0108 20:51:36.996266   35097 ssh_runner.go:195] Run: which crictl
	I0108 20:51:36.999850   35097 command_runner.go:130] > /usr/bin/crictl
	I0108 20:51:36.999900   35097 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:51:37.040186   35097 command_runner.go:130] > Version:  0.1.0
	I0108 20:51:37.040211   35097 command_runner.go:130] > RuntimeName:  cri-o
	I0108 20:51:37.040216   35097 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 20:51:37.040221   35097 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 20:51:37.041246   35097 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 20:51:37.041309   35097 ssh_runner.go:195] Run: crio --version
	I0108 20:51:37.087930   35097 command_runner.go:130] > crio version 1.24.1
	I0108 20:51:37.087953   35097 command_runner.go:130] > Version:          1.24.1
	I0108 20:51:37.087960   35097 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 20:51:37.087965   35097 command_runner.go:130] > GitTreeState:     dirty
	I0108 20:51:37.087973   35097 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 20:51:37.087978   35097 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 20:51:37.087982   35097 command_runner.go:130] > Compiler:         gc
	I0108 20:51:37.087987   35097 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:51:37.087992   35097 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:51:37.087999   35097 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:51:37.088003   35097 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:51:37.088007   35097 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:51:37.088112   35097 ssh_runner.go:195] Run: crio --version
	I0108 20:51:37.139557   35097 command_runner.go:130] > crio version 1.24.1
	I0108 20:51:37.139579   35097 command_runner.go:130] > Version:          1.24.1
	I0108 20:51:37.139586   35097 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 20:51:37.139591   35097 command_runner.go:130] > GitTreeState:     dirty
	I0108 20:51:37.139602   35097 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 20:51:37.139610   35097 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 20:51:37.139618   35097 command_runner.go:130] > Compiler:         gc
	I0108 20:51:37.139626   35097 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:51:37.139633   35097 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:51:37.139640   35097 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:51:37.139644   35097 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:51:37.139648   35097 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:51:37.143283   35097 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 20:51:37.144805   35097 out.go:177]   - env NO_PROXY=192.168.39.196
	I0108 20:51:37.146267   35097 main.go:141] libmachine: (multinode-340815-m02) Calling .GetIP
	I0108 20:51:37.148986   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:51:37.149361   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:51:37.149390   35097 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:51:37.149539   35097 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 20:51:37.154747   35097 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0108 20:51:37.154927   35097 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815 for IP: 192.168.39.78
	I0108 20:51:37.154949   35097 certs.go:190] acquiring lock for shared ca certs: {Name:mke01aa9d73e320a9a3907677cf29c75f0fa86d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:51:37.155091   35097 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key
	I0108 20:51:37.155128   35097 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key
	I0108 20:51:37.155137   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 20:51:37.155148   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 20:51:37.155157   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 20:51:37.155169   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 20:51:37.155220   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem (1338 bytes)
	W0108 20:51:37.155280   35097 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896_empty.pem, impossibly tiny 0 bytes
	I0108 20:51:37.155291   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:51:37.155314   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem (1082 bytes)
	I0108 20:51:37.155337   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:51:37.155366   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem (1675 bytes)
	I0108 20:51:37.155403   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem (1708 bytes)
	I0108 20:51:37.155429   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:51:37.155441   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem -> /usr/share/ca-certificates/17896.pem
	I0108 20:51:37.155452   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> /usr/share/ca-certificates/178962.pem
	I0108 20:51:37.155783   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:51:37.180561   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:51:37.203888   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:51:37.227629   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:51:37.251902   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:51:37.275947   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem --> /usr/share/ca-certificates/17896.pem (1338 bytes)
	I0108 20:51:37.299778   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /usr/share/ca-certificates/178962.pem (1708 bytes)
	I0108 20:51:37.325226   35097 ssh_runner.go:195] Run: openssl version
	I0108 20:51:37.331146   35097 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 20:51:37.331374   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:51:37.341633   35097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:51:37.346779   35097 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:51:37.346813   35097 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:51:37.346871   35097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:51:37.352757   35097 command_runner.go:130] > b5213941
	I0108 20:51:37.352977   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:51:37.362048   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17896.pem && ln -fs /usr/share/ca-certificates/17896.pem /etc/ssl/certs/17896.pem"
	I0108 20:51:37.372369   35097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17896.pem
	I0108 20:51:37.377365   35097 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 20:51:37.377400   35097 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 20:51:37.377455   35097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17896.pem
	I0108 20:51:37.382926   35097 command_runner.go:130] > 51391683
	I0108 20:51:37.383213   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17896.pem /etc/ssl/certs/51391683.0"
	I0108 20:51:37.391978   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178962.pem && ln -fs /usr/share/ca-certificates/178962.pem /etc/ssl/certs/178962.pem"
	I0108 20:51:37.402527   35097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178962.pem
	I0108 20:51:37.407384   35097 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 20:51:37.407599   35097 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 20:51:37.407642   35097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178962.pem
	I0108 20:51:37.413035   35097 command_runner.go:130] > 3ec20f2e
	I0108 20:51:37.413337   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178962.pem /etc/ssl/certs/3ec20f2e.0"
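The hash values logged above (b5213941, 51391683, 3ec20f2e) are OpenSSL subject-name hashes; each "ln -fs" creates the <hash>.0 symlink that the system trust store looks up. A sketch of the same operation for one of the certificates:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"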
	I0108 20:51:37.422883   35097 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:51:37.427679   35097 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:51:37.427877   35097 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:51:37.427977   35097 ssh_runner.go:195] Run: crio config
	I0108 20:51:37.478126   35097 command_runner.go:130] ! time="2024-01-08 20:51:37.471881430Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 20:51:37.478263   35097 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0108 20:51:37.484979   35097 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 20:51:37.485001   35097 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 20:51:37.485013   35097 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 20:51:37.485016   35097 command_runner.go:130] > #
	I0108 20:51:37.485023   35097 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 20:51:37.485029   35097 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 20:51:37.485035   35097 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 20:51:37.485044   35097 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 20:51:37.485048   35097 command_runner.go:130] > # reload'.
	I0108 20:51:37.485054   35097 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 20:51:37.485064   35097 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 20:51:37.485071   35097 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 20:51:37.485082   35097 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 20:51:37.485087   35097 command_runner.go:130] > [crio]
	I0108 20:51:37.485097   35097 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 20:51:37.485108   35097 command_runner.go:130] > # containers images, in this directory.
	I0108 20:51:37.485114   35097 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 20:51:37.485128   35097 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 20:51:37.485139   35097 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 20:51:37.485153   35097 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 20:51:37.485168   35097 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 20:51:37.485179   35097 command_runner.go:130] > storage_driver = "overlay"
	I0108 20:51:37.485190   35097 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 20:51:37.485202   35097 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 20:51:37.485212   35097 command_runner.go:130] > storage_option = [
	I0108 20:51:37.485233   35097 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 20:51:37.485242   35097 command_runner.go:130] > ]
	I0108 20:51:37.485251   35097 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 20:51:37.485265   35097 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 20:51:37.485276   35097 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 20:51:37.485287   35097 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 20:51:37.485298   35097 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 20:51:37.485308   35097 command_runner.go:130] > # always happen on a node reboot
	I0108 20:51:37.485318   35097 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 20:51:37.485329   35097 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 20:51:37.485339   35097 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 20:51:37.485354   35097 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 20:51:37.485365   35097 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 20:51:37.485382   35097 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 20:51:37.485398   35097 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 20:51:37.485407   35097 command_runner.go:130] > # internal_wipe = true
	I0108 20:51:37.485419   35097 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 20:51:37.485430   35097 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 20:51:37.485441   35097 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 20:51:37.485453   35097 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 20:51:37.485462   35097 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 20:51:37.485468   35097 command_runner.go:130] > [crio.api]
	I0108 20:51:37.485474   35097 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 20:51:37.485482   35097 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 20:51:37.485490   35097 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 20:51:37.485497   35097 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 20:51:37.485504   35097 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 20:51:37.485511   35097 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 20:51:37.485518   35097 command_runner.go:130] > # stream_port = "0"
	I0108 20:51:37.485524   35097 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 20:51:37.485530   35097 command_runner.go:130] > # stream_enable_tls = false
	I0108 20:51:37.485537   35097 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 20:51:37.485543   35097 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 20:51:37.485550   35097 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 20:51:37.485558   35097 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 20:51:37.485564   35097 command_runner.go:130] > # minutes.
	I0108 20:51:37.485569   35097 command_runner.go:130] > # stream_tls_cert = ""
	I0108 20:51:37.485577   35097 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 20:51:37.485583   35097 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 20:51:37.485589   35097 command_runner.go:130] > # stream_tls_key = ""
	I0108 20:51:37.485595   35097 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 20:51:37.485604   35097 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 20:51:37.485611   35097 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 20:51:37.485619   35097 command_runner.go:130] > # stream_tls_ca = ""
	I0108 20:51:37.485626   35097 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:51:37.485633   35097 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 20:51:37.485640   35097 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:51:37.485646   35097 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 20:51:37.485661   35097 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 20:51:37.485670   35097 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 20:51:37.485676   35097 command_runner.go:130] > [crio.runtime]
	I0108 20:51:37.485682   35097 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 20:51:37.485692   35097 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 20:51:37.485698   35097 command_runner.go:130] > # "nofile=1024:2048"
	I0108 20:51:37.485717   35097 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 20:51:37.485726   35097 command_runner.go:130] > # default_ulimits = [
	I0108 20:51:37.485730   35097 command_runner.go:130] > # ]
	I0108 20:51:37.485736   35097 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 20:51:37.485739   35097 command_runner.go:130] > # no_pivot = false
	I0108 20:51:37.485745   35097 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 20:51:37.485754   35097 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 20:51:37.485761   35097 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 20:51:37.485767   35097 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 20:51:37.485774   35097 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 20:51:37.485780   35097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:51:37.485787   35097 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 20:51:37.485792   35097 command_runner.go:130] > # Cgroup setting for conmon
	I0108 20:51:37.485799   35097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 20:51:37.485806   35097 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 20:51:37.485812   35097 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 20:51:37.485819   35097 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 20:51:37.485826   35097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:51:37.485832   35097 command_runner.go:130] > conmon_env = [
	I0108 20:51:37.485838   35097 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 20:51:37.485844   35097 command_runner.go:130] > ]
	I0108 20:51:37.485850   35097 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 20:51:37.485857   35097 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 20:51:37.485865   35097 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 20:51:37.485872   35097 command_runner.go:130] > # default_env = [
	I0108 20:51:37.485876   35097 command_runner.go:130] > # ]
	I0108 20:51:37.485884   35097 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 20:51:37.485888   35097 command_runner.go:130] > # selinux = false
	I0108 20:51:37.485896   35097 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 20:51:37.485904   35097 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 20:51:37.485912   35097 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 20:51:37.485919   35097 command_runner.go:130] > # seccomp_profile = ""
	I0108 20:51:37.485925   35097 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 20:51:37.485932   35097 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 20:51:37.485939   35097 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 20:51:37.485946   35097 command_runner.go:130] > # which might increase security.
	I0108 20:51:37.485951   35097 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 20:51:37.485959   35097 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 20:51:37.485967   35097 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 20:51:37.485975   35097 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 20:51:37.485981   35097 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 20:51:37.485988   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:51:37.485993   35097 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 20:51:37.486001   35097 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 20:51:37.486006   35097 command_runner.go:130] > # the cgroup blockio controller.
	I0108 20:51:37.486013   35097 command_runner.go:130] > # blockio_config_file = ""
	I0108 20:51:37.486019   35097 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 20:51:37.486025   35097 command_runner.go:130] > # irqbalance daemon.
	I0108 20:51:37.486031   35097 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 20:51:37.486039   35097 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 20:51:37.486047   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:51:37.486054   35097 command_runner.go:130] > # rdt_config_file = ""
	I0108 20:51:37.486059   35097 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 20:51:37.486066   35097 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 20:51:37.486072   35097 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 20:51:37.486078   35097 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 20:51:37.486084   35097 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 20:51:37.486096   35097 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 20:51:37.486105   35097 command_runner.go:130] > # will be added.
	I0108 20:51:37.486115   35097 command_runner.go:130] > # default_capabilities = [
	I0108 20:51:37.486124   35097 command_runner.go:130] > # 	"CHOWN",
	I0108 20:51:37.486133   35097 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 20:51:37.486141   35097 command_runner.go:130] > # 	"FSETID",
	I0108 20:51:37.486148   35097 command_runner.go:130] > # 	"FOWNER",
	I0108 20:51:37.486157   35097 command_runner.go:130] > # 	"SETGID",
	I0108 20:51:37.486166   35097 command_runner.go:130] > # 	"SETUID",
	I0108 20:51:37.486172   35097 command_runner.go:130] > # 	"SETPCAP",
	I0108 20:51:37.486182   35097 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 20:51:37.486189   35097 command_runner.go:130] > # 	"KILL",
	I0108 20:51:37.486197   35097 command_runner.go:130] > # ]
	I0108 20:51:37.486206   35097 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 20:51:37.486214   35097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:51:37.486226   35097 command_runner.go:130] > # default_sysctls = [
	I0108 20:51:37.486229   35097 command_runner.go:130] > # ]
	I0108 20:51:37.486237   35097 command_runner.go:130] > # List of devices on the host that a
	I0108 20:51:37.486243   35097 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 20:51:37.486250   35097 command_runner.go:130] > # allowed_devices = [
	I0108 20:51:37.486254   35097 command_runner.go:130] > # 	"/dev/fuse",
	I0108 20:51:37.486259   35097 command_runner.go:130] > # ]
	I0108 20:51:37.486264   35097 command_runner.go:130] > # List of additional devices. specified as
	I0108 20:51:37.486273   35097 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 20:51:37.486281   35097 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 20:51:37.486300   35097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:51:37.486306   35097 command_runner.go:130] > # additional_devices = [
	I0108 20:51:37.486310   35097 command_runner.go:130] > # ]
	I0108 20:51:37.486315   35097 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 20:51:37.486321   35097 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 20:51:37.486325   35097 command_runner.go:130] > # 	"/etc/cdi",
	I0108 20:51:37.486331   35097 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 20:51:37.486335   35097 command_runner.go:130] > # ]
	I0108 20:51:37.486343   35097 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 20:51:37.486351   35097 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 20:51:37.486357   35097 command_runner.go:130] > # Defaults to false.
	I0108 20:51:37.486363   35097 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 20:51:37.486371   35097 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 20:51:37.486379   35097 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 20:51:37.486385   35097 command_runner.go:130] > # hooks_dir = [
	I0108 20:51:37.486390   35097 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 20:51:37.486396   35097 command_runner.go:130] > # ]
	I0108 20:51:37.486403   35097 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 20:51:37.486412   35097 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 20:51:37.486419   35097 command_runner.go:130] > # its default mounts from the following two files:
	I0108 20:51:37.486423   35097 command_runner.go:130] > #
	I0108 20:51:37.486430   35097 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 20:51:37.486438   35097 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 20:51:37.486446   35097 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 20:51:37.486452   35097 command_runner.go:130] > #
	I0108 20:51:37.486458   35097 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 20:51:37.486466   35097 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 20:51:37.486474   35097 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 20:51:37.486481   35097 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 20:51:37.486487   35097 command_runner.go:130] > #
	I0108 20:51:37.486491   35097 command_runner.go:130] > # default_mounts_file = ""
	I0108 20:51:37.486499   35097 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 20:51:37.486507   35097 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 20:51:37.486514   35097 command_runner.go:130] > pids_limit = 1024
	I0108 20:51:37.486520   35097 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0108 20:51:37.486528   35097 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 20:51:37.486536   35097 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 20:51:37.486544   35097 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 20:51:37.486550   35097 command_runner.go:130] > # log_size_max = -1
	I0108 20:51:37.486557   35097 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0108 20:51:37.486564   35097 command_runner.go:130] > # log_to_journald = false
	I0108 20:51:37.486570   35097 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 20:51:37.486577   35097 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 20:51:37.486582   35097 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 20:51:37.486589   35097 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 20:51:37.486598   35097 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 20:51:37.486605   35097 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 20:51:37.486613   35097 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 20:51:37.486617   35097 command_runner.go:130] > # read_only = false
	I0108 20:51:37.486625   35097 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 20:51:37.486631   35097 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 20:51:37.486637   35097 command_runner.go:130] > # live configuration reload.
	I0108 20:51:37.486642   35097 command_runner.go:130] > # log_level = "info"
	I0108 20:51:37.486649   35097 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 20:51:37.486654   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:51:37.486660   35097 command_runner.go:130] > # log_filter = ""
	I0108 20:51:37.486666   35097 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 20:51:37.486681   35097 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 20:51:37.486685   35097 command_runner.go:130] > # separated by comma.
	I0108 20:51:37.486689   35097 command_runner.go:130] > # uid_mappings = ""
	I0108 20:51:37.486695   35097 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 20:51:37.486701   35097 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 20:51:37.486705   35097 command_runner.go:130] > # separated by comma.
	I0108 20:51:37.486709   35097 command_runner.go:130] > # gid_mappings = ""
	I0108 20:51:37.486714   35097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 20:51:37.486724   35097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:51:37.486732   35097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:51:37.486740   35097 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 20:51:37.486746   35097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 20:51:37.486754   35097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:51:37.486760   35097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:51:37.486766   35097 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 20:51:37.486772   35097 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 20:51:37.486781   35097 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 20:51:37.486788   35097 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 20:51:37.486795   35097 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 20:51:37.486801   35097 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 20:51:37.486809   35097 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 20:51:37.486816   35097 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 20:51:37.486821   35097 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 20:51:37.486828   35097 command_runner.go:130] > drop_infra_ctr = false
	I0108 20:51:37.486835   35097 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 20:51:37.486842   35097 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 20:51:37.486850   35097 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 20:51:37.486856   35097 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 20:51:37.486864   35097 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 20:51:37.486871   35097 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 20:51:37.486876   35097 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 20:51:37.486885   35097 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 20:51:37.486891   35097 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 20:51:37.486898   35097 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 20:51:37.486906   35097 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 20:51:37.486912   35097 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 20:51:37.486919   35097 command_runner.go:130] > # default_runtime = "runc"
	I0108 20:51:37.486925   35097 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 20:51:37.486934   35097 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 20:51:37.486946   35097 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 20:51:37.486954   35097 command_runner.go:130] > # creation as a file is not desired either.
	I0108 20:51:37.486962   35097 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 20:51:37.486969   35097 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 20:51:37.486973   35097 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 20:51:37.486978   35097 command_runner.go:130] > # ]
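	(Editor's note: spelled out, the /etc/hostname case mentioned in the comments above would be configured as shown below; this is illustrative only and is not active in this dump.)

	    absent_mount_sources_to_reject = [
	        "/etc/hostname",
	    ]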
	I0108 20:51:37.486985   35097 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 20:51:37.486993   35097 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 20:51:37.487002   35097 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 20:51:37.487011   35097 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 20:51:37.487016   35097 command_runner.go:130] > #
	I0108 20:51:37.487021   35097 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 20:51:37.487028   35097 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 20:51:37.487032   35097 command_runner.go:130] > #  runtime_type = "oci"
	I0108 20:51:37.487038   35097 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 20:51:37.487043   35097 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 20:51:37.487050   35097 command_runner.go:130] > #  allowed_annotations = []
	I0108 20:51:37.487054   35097 command_runner.go:130] > # Where:
	I0108 20:51:37.487063   35097 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 20:51:37.487071   35097 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 20:51:37.487079   35097 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 20:51:37.487088   35097 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 20:51:37.487097   35097 command_runner.go:130] > #   in $PATH.
	I0108 20:51:37.487109   35097 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 20:51:37.487121   35097 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 20:51:37.487134   35097 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 20:51:37.487143   35097 command_runner.go:130] > #   state.
	I0108 20:51:37.487156   35097 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 20:51:37.487169   35097 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0108 20:51:37.487183   35097 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 20:51:37.487193   35097 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 20:51:37.487201   35097 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 20:51:37.487210   35097 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 20:51:37.487222   35097 command_runner.go:130] > #   The currently recognized values are:
	I0108 20:51:37.487231   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 20:51:37.487240   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 20:51:37.487248   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 20:51:37.487256   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 20:51:37.487265   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 20:51:37.487275   35097 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 20:51:37.487283   35097 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 20:51:37.487291   35097 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 20:51:37.487298   35097 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 20:51:37.487303   35097 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 20:51:37.487310   35097 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 20:51:37.487314   35097 command_runner.go:130] > runtime_type = "oci"
	I0108 20:51:37.487321   35097 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 20:51:37.487325   35097 command_runner.go:130] > runtime_config_path = ""
	I0108 20:51:37.487332   35097 command_runner.go:130] > monitor_path = ""
	I0108 20:51:37.487336   35097 command_runner.go:130] > monitor_cgroup = ""
	I0108 20:51:37.487342   35097 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 20:51:37.487349   35097 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 20:51:37.487355   35097 command_runner.go:130] > # running containers
	I0108 20:51:37.487359   35097 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 20:51:37.487367   35097 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 20:51:37.487392   35097 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 20:51:37.487400   35097 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 20:51:37.487406   35097 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 20:51:37.487413   35097 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 20:51:37.487417   35097 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 20:51:37.487424   35097 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 20:51:37.487429   35097 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 20:51:37.487435   35097 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
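	(Editor's note: for comparison with the active runc entry further up, a VM-type handler such as one of the commented kata entries would use the same table fields; every path below is a placeholder, not something taken from this machine.)

	    # [crio.runtime.runtimes.kata-qemu]
	    #   runtime_path = "/path/to/containerd-shim-kata-v2"
	    #   runtime_type = "vm"
	    #   runtime_root = "/run/kata"
	    #   runtime_config_path = "/path/to/configuration-qemu.toml"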
	I0108 20:51:37.487442   35097 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 20:51:37.487449   35097 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 20:51:37.487458   35097 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 20:51:37.487466   35097 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0108 20:51:37.487475   35097 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 20:51:37.487483   35097 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 20:51:37.487493   35097 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 20:51:37.487502   35097 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 20:51:37.487511   35097 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 20:51:37.487520   35097 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 20:51:37.487526   35097 command_runner.go:130] > # Example:
	I0108 20:51:37.487531   35097 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 20:51:37.487538   35097 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 20:51:37.487543   35097 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 20:51:37.487550   35097 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 20:51:37.487554   35097 command_runner.go:130] > # cpuset = 0
	I0108 20:51:37.487560   35097 command_runner.go:130] > # cpushares = "0-1"
	I0108 20:51:37.487564   35097 command_runner.go:130] > # Where:
	I0108 20:51:37.487571   35097 command_runner.go:130] > # The workload name is workload-type.
	I0108 20:51:37.487578   35097 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 20:51:37.487586   35097 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 20:51:37.487592   35097 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 20:51:37.487602   35097 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 20:51:37.487609   35097 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 20:51:37.487615   35097 command_runner.go:130] > # 
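	(Editor's note: tying the workload comments above together, a pod opting into the example "workload-type" workload and overriding cpu shares for one of its containers would carry annotations roughly like the following; the container name and value are hypothetical, and the workloads table is not enabled in this dump.)

	    metadata:
	      annotations:
	        io.crio/workload: ""
	        io.crio.workload-type/my-container: '{"cpushares": "512"}'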
	I0108 20:51:37.487622   35097 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 20:51:37.487627   35097 command_runner.go:130] > #
	I0108 20:51:37.487633   35097 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 20:51:37.487641   35097 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 20:51:37.487649   35097 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 20:51:37.487658   35097 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 20:51:37.487665   35097 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 20:51:37.487672   35097 command_runner.go:130] > [crio.image]
	I0108 20:51:37.487678   35097 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 20:51:37.487685   35097 command_runner.go:130] > # default_transport = "docker://"
	I0108 20:51:37.487691   35097 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 20:51:37.487699   35097 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:51:37.487705   35097 command_runner.go:130] > # global_auth_file = ""
	I0108 20:51:37.487711   35097 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 20:51:37.487720   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:51:37.487727   35097 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 20:51:37.487733   35097 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 20:51:37.487742   35097 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:51:37.487749   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:51:37.487753   35097 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 20:51:37.487761   35097 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 20:51:37.487770   35097 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 20:51:37.487777   35097 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 20:51:37.487785   35097 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 20:51:37.487791   35097 command_runner.go:130] > # pause_command = "/pause"
	I0108 20:51:37.487798   35097 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 20:51:37.487806   35097 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 20:51:37.487814   35097 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 20:51:37.487822   35097 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 20:51:37.487828   35097 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 20:51:37.487834   35097 command_runner.go:130] > # signature_policy = ""
	I0108 20:51:37.487840   35097 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 20:51:37.487849   35097 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 20:51:37.487853   35097 command_runner.go:130] > # changing them here.
	I0108 20:51:37.487857   35097 command_runner.go:130] > # insecure_registries = [
	I0108 20:51:37.487861   35097 command_runner.go:130] > # ]
	I0108 20:51:37.487868   35097 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 20:51:37.487876   35097 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 20:51:37.487881   35097 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 20:51:37.487888   35097 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 20:51:37.487893   35097 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 20:51:37.487901   35097 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 20:51:37.487906   35097 command_runner.go:130] > # CNI plugins.
	I0108 20:51:37.487912   35097 command_runner.go:130] > [crio.network]
	I0108 20:51:37.487917   35097 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 20:51:37.487923   35097 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 20:51:37.487931   35097 command_runner.go:130] > # cni_default_network = ""
	I0108 20:51:37.487938   35097 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 20:51:37.487943   35097 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 20:51:37.487949   35097 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 20:51:37.487952   35097 command_runner.go:130] > # plugin_dirs = [
	I0108 20:51:37.487956   35097 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 20:51:37.487961   35097 command_runner.go:130] > # ]
	I0108 20:51:37.487967   35097 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 20:51:37.487974   35097 command_runner.go:130] > [crio.metrics]
	I0108 20:51:37.487979   35097 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 20:51:37.487985   35097 command_runner.go:130] > enable_metrics = true
	I0108 20:51:37.487990   35097 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 20:51:37.487997   35097 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 20:51:37.488003   35097 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0108 20:51:37.488011   35097 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 20:51:37.488017   35097 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 20:51:37.488022   35097 command_runner.go:130] > # metrics_collectors = [
	I0108 20:51:37.488026   35097 command_runner.go:130] > # 	"operations",
	I0108 20:51:37.488031   35097 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 20:51:37.488036   35097 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 20:51:37.488040   35097 command_runner.go:130] > # 	"operations_errors",
	I0108 20:51:37.488047   35097 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 20:51:37.488051   35097 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 20:51:37.488055   35097 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 20:51:37.488062   35097 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 20:51:37.488066   35097 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 20:51:37.488072   35097 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 20:51:37.488077   35097 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 20:51:37.488083   35097 command_runner.go:130] > # 	"containers_oom_total",
	I0108 20:51:37.488087   35097 command_runner.go:130] > # 	"containers_oom",
	I0108 20:51:37.488116   35097 command_runner.go:130] > # 	"processes_defunct",
	I0108 20:51:37.488124   35097 command_runner.go:130] > # 	"operations_total",
	I0108 20:51:37.488134   35097 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 20:51:37.488141   35097 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 20:51:37.488151   35097 command_runner.go:130] > # 	"operations_errors_total",
	I0108 20:51:37.488161   35097 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 20:51:37.488170   35097 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 20:51:37.488181   35097 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 20:51:37.488191   35097 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 20:51:37.488201   35097 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 20:51:37.488209   35097 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 20:51:37.488222   35097 command_runner.go:130] > # ]
	I0108 20:51:37.488233   35097 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 20:51:37.488243   35097 command_runner.go:130] > # metrics_port = 9090
	I0108 20:51:37.488251   35097 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 20:51:37.488257   35097 command_runner.go:130] > # metrics_socket = ""
	I0108 20:51:37.488262   35097 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 20:51:37.488271   35097 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 20:51:37.488279   35097 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 20:51:37.488287   35097 command_runner.go:130] > # certificate on any modification event.
	I0108 20:51:37.488291   35097 command_runner.go:130] > # metrics_cert = ""
	I0108 20:51:37.488299   35097 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 20:51:37.488304   35097 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 20:51:37.488310   35097 command_runner.go:130] > # metrics_key = ""
	I0108 20:51:37.488316   35097 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 20:51:37.488323   35097 command_runner.go:130] > [crio.tracing]
	I0108 20:51:37.488329   35097 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 20:51:37.488335   35097 command_runner.go:130] > # enable_tracing = false
	I0108 20:51:37.488340   35097 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0108 20:51:37.488347   35097 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 20:51:37.488352   35097 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 20:51:37.488360   35097 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 20:51:37.488368   35097 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 20:51:37.488375   35097 command_runner.go:130] > [crio.stats]
	I0108 20:51:37.488381   35097 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 20:51:37.488388   35097 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 20:51:37.488393   35097 command_runner.go:130] > # stats_collection_period = 0
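	(Editor's note: the dump above is the CRI-O configuration in effect on this node as logged by minikube. If one of these knobs had to be changed by hand, the usual route is a drop-in fragment plus a service restart; the drop-in directory below is CRI-O's default search path and the ssh node flag is an assumption, neither is taken from this log.)

	    $ minikube -p multinode-340815 ssh -n m02
	    $ sudo tee /etc/crio/crio.conf.d/99-local.conf <<'EOF'
	    [crio.runtime]
	    log_level = "debug"
	    EOF
	    $ sudo systemctl restart crio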
	I0108 20:51:37.488452   35097 cni.go:84] Creating CNI manager for ""
	I0108 20:51:37.488460   35097 cni.go:136] 3 nodes found, recommending kindnet
	I0108 20:51:37.488475   35097 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:51:37.488493   35097 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-340815 NodeName:multinode-340815-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:51:37.488601   35097 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-340815-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:51:37.488647   35097 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-340815-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 20:51:37.488695   35097 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:51:37.497414   35097 command_runner.go:130] > kubeadm
	I0108 20:51:37.497432   35097 command_runner.go:130] > kubectl
	I0108 20:51:37.497437   35097 command_runner.go:130] > kubelet
	I0108 20:51:37.497563   35097 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:51:37.497632   35097 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 20:51:37.505981   35097 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0108 20:51:37.522904   35097 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:51:37.539266   35097 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0108 20:51:37.542869   35097 command_runner.go:130] > 192.168.39.196	control-plane.minikube.internal
	I0108 20:51:37.543136   35097 host.go:66] Checking if "multinode-340815" exists ...
	I0108 20:51:37.543464   35097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:51:37.543467   35097 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:51:37.543490   35097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:51:37.558352   35097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0108 20:51:37.558782   35097 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:51:37.559238   35097 main.go:141] libmachine: Using API Version  1
	I0108 20:51:37.559260   35097 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:51:37.559565   35097 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:51:37.559747   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:51:37.559927   35097 start.go:304] JoinCluster: &{Name:multinode-340815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:51:37.560035   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 20:51:37.560058   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:51:37.562951   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:51:37.563372   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:51:37.563393   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:51:37.563546   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:51:37.563698   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:51:37.563823   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:51:37.563951   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:51:37.756321   35097 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ipgtlg.qs7uwchvaqxs5got --discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 
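	(Editor's note: the --discovery-token-ca-cert-hash printed above can be cross-checked against the cluster CA under the certificatesDir from the kubeadm config earlier, /var/lib/minikube/certs. One way to recompute it, assuming the RSA CA key minikube generates by default, is:)

	    $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | sha256sum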
	I0108 20:51:37.760025   35097 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 20:51:37.760071   35097 host.go:66] Checking if "multinode-340815" exists ...
	I0108 20:51:37.760498   35097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:51:37.760538   35097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:51:37.774934   35097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41095
	I0108 20:51:37.775319   35097 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:51:37.775801   35097 main.go:141] libmachine: Using API Version  1
	I0108 20:51:37.775823   35097 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:51:37.776156   35097 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:51:37.776337   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:51:37.776518   35097 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-340815-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0108 20:51:37.776547   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:51:37.779524   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:51:37.779896   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:51:37.779922   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:51:37.780145   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:51:37.780340   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:51:37.780488   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:51:37.780630   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:51:37.949492   35097 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0108 20:51:38.003427   35097 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-tqjx8, kube-system/kube-proxy-j5w6d
	I0108 20:51:41.029554   35097 command_runner.go:130] > node/multinode-340815-m02 cordoned
	I0108 20:51:41.029586   35097 command_runner.go:130] > pod "busybox-5bc68d56bd-95tbd" has DeletionTimestamp older than 1 seconds, skipping
	I0108 20:51:41.029598   35097 command_runner.go:130] > node/multinode-340815-m02 drained
	I0108 20:51:41.029621   35097 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-340815-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.253068128s)
	I0108 20:51:41.029641   35097 node.go:108] successfully drained node "m02"
	I0108 20:51:41.029972   35097 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:51:41.030166   35097 kapi.go:59] client config for multinode-340815: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:51:41.030478   35097 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0108 20:51:41.030526   35097 round_trippers.go:463] DELETE https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:51:41.030534   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:41.030541   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:41.030547   35097 round_trippers.go:473]     Content-Type: application/json
	I0108 20:51:41.030552   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:41.049211   35097 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0108 20:51:41.049241   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:41.049248   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:41.049254   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:41.049259   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:41.049267   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:41.049272   35097 round_trippers.go:580]     Content-Length: 171
	I0108 20:51:41.049280   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:41 GMT
	I0108 20:51:41.049289   35097 round_trippers.go:580]     Audit-Id: 985d018e-3b6e-4964-bf38-0ae2aa993a73
	I0108 20:51:41.049314   35097 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-340815-m02","kind":"nodes","uid":"7d3787a8-1ccb-4d1a-b330-2c517ae59e99"}}
	I0108 20:51:41.049351   35097 node.go:124] successfully deleted node "m02"
	I0108 20:51:41.049363   35097 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
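	(Editor's note: condensed, the drain-and-remove step logged above is equivalent to running the following against the cluster, with the same flags as the logged invocation minus the --delete-local-data flag that the warning marks as deprecated.)

	    $ kubectl drain multinode-340815-m02 --force --grace-period=1 \
	        --skip-wait-for-delete-timeout=1 --disable-eviction \
	        --ignore-daemonsets --delete-emptydir-data
	    $ kubectl delete node multinode-340815-m02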
	I0108 20:51:41.049389   35097 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 20:51:41.049414   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ipgtlg.qs7uwchvaqxs5got --discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-340815-m02"
	I0108 20:51:41.102682   35097 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 20:51:41.281124   35097 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 20:51:41.281162   35097 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 20:51:41.352207   35097 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:51:41.352238   35097 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:51:41.352248   35097 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 20:51:41.504221   35097 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 20:51:42.027949   35097 command_runner.go:130] > This node has joined the cluster:
	I0108 20:51:42.027980   35097 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 20:51:42.027990   35097 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 20:51:42.028005   35097 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 20:51:42.031103   35097 command_runner.go:130] ! W0108 20:51:41.096168    2666 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 20:51:42.031132   35097 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 20:51:42.031146   35097 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 20:51:42.031162   35097 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 20:51:42.031191   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 20:51:42.332868   35097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=multinode-340815 minikube.k8s.io/updated_at=2024_01_08T20_51_42_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:51:42.445643   35097 command_runner.go:130] > node/multinode-340815-m02 labeled
	I0108 20:51:42.463857   35097 command_runner.go:130] > node/multinode-340815-m03 labeled
	I0108 20:51:42.466128   35097 start.go:306] JoinCluster complete in 4.906197043s
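	(Editor's note: as the kubeadm output above suggests, the rejoin can be confirmed from the control plane, for example with the command below; the context name is assumed to match the profile.)

	    $ kubectl --context multinode-340815 get nodes -o wide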
	I0108 20:51:42.466156   35097 cni.go:84] Creating CNI manager for ""
	I0108 20:51:42.466161   35097 cni.go:136] 3 nodes found, recommending kindnet
	I0108 20:51:42.466205   35097 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:51:42.472631   35097 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 20:51:42.472655   35097 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 20:51:42.472665   35097 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 20:51:42.472675   35097 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:51:42.472684   35097 command_runner.go:130] > Access: 2024-01-08 20:49:02.982432026 +0000
	I0108 20:51:42.472696   35097 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 20:51:42.472708   35097 command_runner.go:130] > Change: 2024-01-08 20:49:01.008432026 +0000
	I0108 20:51:42.472716   35097 command_runner.go:130] >  Birth: -
	I0108 20:51:42.473218   35097 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:51:42.473235   35097 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:51:42.491477   35097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:51:42.848488   35097 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 20:51:42.848514   35097 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 20:51:42.848521   35097 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 20:51:42.848525   35097 command_runner.go:130] > daemonset.apps/kindnet configured
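	(Editor's note: one way to confirm that the kindnet DaemonSet reconfigured above has rolled out on all three nodes; the command is illustrative, not from this log.)

	    $ kubectl -n kube-system rollout status daemonset/kindnet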
	I0108 20:51:42.848977   35097 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:51:42.849174   35097 kapi.go:59] client config for multinode-340815: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:51:42.849453   35097 round_trippers.go:463] GET https://192.168.39.196:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:51:42.849464   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:42.849471   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:42.849477   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:42.851393   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:51:42.851410   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:42.851421   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:42.851431   35097 round_trippers.go:580]     Content-Length: 291
	I0108 20:51:42.851447   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:42 GMT
	I0108 20:51:42.851453   35097 round_trippers.go:580]     Audit-Id: d52f48d5-1aea-43ee-91f6-28aa9919e25c
	I0108 20:51:42.851458   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:42.851463   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:42.851468   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:42.851488   35097 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8a90ea09-afeb-4dda-ab10-18a22e37ea78","resourceVersion":"928","creationTimestamp":"2024-01-08T20:38:05Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 20:51:42.851572   35097 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-340815" context rescaled to 1 replicas
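	(Editor's note: the rescale noted above acts on the Scale subresource fetched in the preceding request; imperatively it corresponds to the following, shown for illustration only.)

	    $ kubectl -n kube-system scale deployment coredns --replicas=1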
	I0108 20:51:42.851604   35097 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0108 20:51:42.853739   35097 out.go:177] * Verifying Kubernetes components...
	I0108 20:51:42.855408   35097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:51:42.871565   35097 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:51:42.871770   35097 kapi.go:59] client config for multinode-340815: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:51:42.871995   35097 node_ready.go:35] waiting up to 6m0s for node "multinode-340815-m02" to be "Ready" ...
	I0108 20:51:42.872058   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:51:42.872065   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:42.872073   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:42.872078   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:42.874652   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:42.874670   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:42.874679   35097 round_trippers.go:580]     Audit-Id: a574958f-f1a3-4070-9ef5-f68bbd6fd2cf
	I0108 20:51:42.874685   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:42.874690   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:42.874695   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:42.874700   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:42.874705   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:42 GMT
	I0108 20:51:42.875011   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"a3509707-a676-45da-aba0-ccedece9b18c","resourceVersion":"1087","creationTimestamp":"2024-01-08T20:51:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_51_42_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:51:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0108 20:51:42.875288   35097 node_ready.go:49] node "multinode-340815-m02" has status "Ready":"True"
	I0108 20:51:42.875304   35097 node_ready.go:38] duration metric: took 3.294972ms waiting for node "multinode-340815-m02" to be "Ready" ...
	I0108 20:51:42.875313   35097 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:51:42.875360   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:51:42.875376   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:42.875383   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:42.875389   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:42.881391   35097 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0108 20:51:42.881416   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:42.881423   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:42.881429   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:42 GMT
	I0108 20:51:42.881434   35097 round_trippers.go:580]     Audit-Id: 874c3c37-c252-4d11-85ce-16409707bde8
	I0108 20:51:42.881440   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:42.881445   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:42.881451   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:42.883425   35097 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1095"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"924","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82238 chars]
	I0108 20:51:42.886121   35097 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:42.886224   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:51:42.886233   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:42.886240   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:42.886248   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:42.888490   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:42.888507   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:42.888517   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:42.888525   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:42.888533   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:42 GMT
	I0108 20:51:42.888542   35097 round_trippers.go:580]     Audit-Id: c2bb4ce4-d663-45f2-9723-3989c6c459af
	I0108 20:51:42.888554   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:42.888564   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:42.888859   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"924","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0108 20:51:42.889342   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:51:42.889361   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:42.889371   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:42.889380   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:42.891429   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:42.891446   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:42.891455   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:42.891463   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:42.891473   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:42.891480   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:42.891487   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:42 GMT
	I0108 20:51:42.891496   35097 round_trippers.go:580]     Audit-Id: ea32b557-53a6-4df5-a7cf-9bfafdf54251
	I0108 20:51:42.891763   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 20:51:42.892139   35097 pod_ready.go:92] pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace has status "Ready":"True"
	I0108 20:51:42.892158   35097 pod_ready.go:81] duration metric: took 6.01462ms waiting for pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:42.892166   35097 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:42.892224   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-340815
	I0108 20:51:42.892235   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:42.892245   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:42.892255   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:42.894419   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:42.894437   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:42.894445   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:42.894452   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:42.894460   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:42 GMT
	I0108 20:51:42.894470   35097 round_trippers.go:580]     Audit-Id: cc090f25-2c9f-4a75-94f6-745ad6b7ba77
	I0108 20:51:42.894478   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:42.894489   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:42.894804   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-340815","namespace":"kube-system","uid":"c6d1e2c4-6dbc-4495-ac68-c4b030195c2c","resourceVersion":"916","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.196:2379","kubernetes.io/config.hash":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.mirror":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.seen":"2024-01-08T20:38:05.870869333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0108 20:51:42.895185   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:51:42.895198   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:42.895205   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:42.895210   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:42.897301   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:42.897318   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:42.897327   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:42 GMT
	I0108 20:51:42.897334   35097 round_trippers.go:580]     Audit-Id: bb4bafef-3dcc-4011-b4a1-ac78acef91df
	I0108 20:51:42.897342   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:42.897349   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:42.897366   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:42.897378   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:42.897477   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 20:51:42.897736   35097 pod_ready.go:92] pod "etcd-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:51:42.897748   35097 pod_ready.go:81] duration metric: took 5.572988ms waiting for pod "etcd-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:42.897763   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:42.897804   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-340815
	I0108 20:51:42.897813   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:42.897823   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:42.897831   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:42.899801   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:51:42.899819   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:42.899828   35097 round_trippers.go:580]     Audit-Id: 8a4816f6-b6c4-4bfd-a359-a6931e26da6e
	I0108 20:51:42.899837   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:42.899845   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:42.899861   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:42.899869   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:42.899884   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:42 GMT
	I0108 20:51:42.900044   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-340815","namespace":"kube-system","uid":"523b3dcf-2fae-43b4-a9c6-cd2337ae6d6f","resourceVersion":"914","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.196:8443","kubernetes.io/config.hash":"5a9f4acc9b0ffa502cc0493a6d857b92","kubernetes.io/config.mirror":"5a9f4acc9b0ffa502cc0493a6d857b92","kubernetes.io/config.seen":"2024-01-08T20:38:05.870870627Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0108 20:51:42.900500   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:51:42.900517   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:42.900528   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:42.900537   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:42.902105   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:51:42.902122   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:42.902132   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:42 GMT
	I0108 20:51:42.902140   35097 round_trippers.go:580]     Audit-Id: 99280b39-6792-4051-8ce5-0b9ee84228aa
	I0108 20:51:42.902147   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:42.902154   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:42.902161   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:42.902169   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:42.902350   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 20:51:42.902662   35097 pod_ready.go:92] pod "kube-apiserver-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:51:42.902678   35097 pod_ready.go:81] duration metric: took 4.907337ms waiting for pod "kube-apiserver-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:42.902689   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:42.902764   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-340815
	I0108 20:51:42.902774   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:42.902784   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:42.902794   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:42.905087   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:42.905101   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:42.905108   35097 round_trippers.go:580]     Audit-Id: d78dcf7d-8583-4403-b3c3-513d9d8b331b
	I0108 20:51:42.905116   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:42.905124   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:42.905132   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:42.905141   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:42.905152   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:42 GMT
	I0108 20:51:42.905330   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-340815","namespace":"kube-system","uid":"3b29ca3f-d23b-4add-a5fb-d59381398862","resourceVersion":"912","creationTimestamp":"2024-01-08T20:38:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1f741652d6560a2396658aaab123d801","kubernetes.io/config.mirror":"1f741652d6560a2396658aaab123d801","kubernetes.io/config.seen":"2024-01-08T20:37:56.785419514Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0108 20:51:42.905684   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:51:42.905698   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:42.905718   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:42.905727   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:42.907847   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:42.907858   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:42.907864   35097 round_trippers.go:580]     Audit-Id: 6e500e86-651a-4b89-a6ea-1f85ffee241a
	I0108 20:51:42.907870   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:42.907875   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:42.907882   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:42.907891   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:42.907899   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:42 GMT
	I0108 20:51:42.908111   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 20:51:42.908484   35097 pod_ready.go:92] pod "kube-controller-manager-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:51:42.908500   35097 pod_ready.go:81] duration metric: took 5.803373ms waiting for pod "kube-controller-manager-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:42.908513   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j5w6d" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:43.072971   35097 request.go:629] Waited for 164.398291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5w6d
	I0108 20:51:43.073029   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5w6d
	I0108 20:51:43.073033   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:43.073041   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:43.073047   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:43.076147   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:51:43.076175   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:43.076186   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:43.076195   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:43.076202   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:43.076209   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:43 GMT
	I0108 20:51:43.076217   35097 round_trippers.go:580]     Audit-Id: 66599b01-0d77-4cc9-9d1e-1269d3884329
	I0108 20:51:43.076226   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:43.076456   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5w6d","generateName":"kube-proxy-","namespace":"kube-system","uid":"61568130-b69e-48ce-86f0-9a9e63ed99ab","resourceVersion":"1093","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0108 20:51:43.272218   35097 request.go:629] Waited for 195.221869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:51:43.272313   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:51:43.272320   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:43.272331   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:43.272340   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:43.275186   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:43.275212   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:43.275221   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:43.275229   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:43.275254   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:43.275277   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:43 GMT
	I0108 20:51:43.275299   35097 round_trippers.go:580]     Audit-Id: b8363b75-9ba0-4097-9591-b1fd6c9d78fd
	I0108 20:51:43.275310   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:43.275433   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"a3509707-a676-45da-aba0-ccedece9b18c","resourceVersion":"1087","creationTimestamp":"2024-01-08T20:51:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_51_42_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:51:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0108 20:51:43.472930   35097 request.go:629] Waited for 63.276642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5w6d
	I0108 20:51:43.473009   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5w6d
	I0108 20:51:43.473014   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:43.473022   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:43.473028   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:43.477212   35097 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:51:43.477240   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:43.477252   35097 round_trippers.go:580]     Audit-Id: e27fa293-f043-42f1-a3cd-f7a42f0dd7b8
	I0108 20:51:43.477261   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:43.477270   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:43.477279   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:43.477294   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:43.477302   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:43 GMT
	I0108 20:51:43.477438   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5w6d","generateName":"kube-proxy-","namespace":"kube-system","uid":"61568130-b69e-48ce-86f0-9a9e63ed99ab","resourceVersion":"1093","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0108 20:51:43.672276   35097 request.go:629] Waited for 194.322014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:51:43.672348   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:51:43.672353   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:43.672360   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:43.672368   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:43.675416   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:51:43.675441   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:43.675450   35097 round_trippers.go:580]     Audit-Id: 0c391574-af37-4da6-a578-e16cc0dc8469
	I0108 20:51:43.675458   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:43.675465   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:43.675472   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:43.675481   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:43.675489   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:43 GMT
	I0108 20:51:43.675743   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"a3509707-a676-45da-aba0-ccedece9b18c","resourceVersion":"1087","creationTimestamp":"2024-01-08T20:51:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_51_42_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:51:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0108 20:51:43.909261   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5w6d
	I0108 20:51:43.909287   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:43.909296   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:43.909302   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:43.912122   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:43.912141   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:43.912148   35097 round_trippers.go:580]     Audit-Id: 8c3aebba-5123-458d-a403-47517cfc196a
	I0108 20:51:43.912153   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:43.912161   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:43.912169   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:43.912180   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:43.912188   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:43 GMT
	I0108 20:51:43.912344   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5w6d","generateName":"kube-proxy-","namespace":"kube-system","uid":"61568130-b69e-48ce-86f0-9a9e63ed99ab","resourceVersion":"1103","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0108 20:51:44.073115   35097 request.go:629] Waited for 160.332598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:51:44.073196   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:51:44.073211   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:44.073223   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:44.073232   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:44.077566   35097 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:51:44.077592   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:44.077603   35097 round_trippers.go:580]     Audit-Id: 90d7f324-21bd-41a0-b7c5-e309ef4e2607
	I0108 20:51:44.077609   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:44.077618   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:44.077626   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:44.077634   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:44.077642   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:44 GMT
	I0108 20:51:44.077904   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"a3509707-a676-45da-aba0-ccedece9b18c","resourceVersion":"1087","creationTimestamp":"2024-01-08T20:51:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_51_42_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:51:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0108 20:51:44.078151   35097 pod_ready.go:92] pod "kube-proxy-j5w6d" in "kube-system" namespace has status "Ready":"True"
	I0108 20:51:44.078166   35097 pod_ready.go:81] duration metric: took 1.169642469s waiting for pod "kube-proxy-j5w6d" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:44.078177   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lxkrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:44.272652   35097 request.go:629] Waited for 194.416272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lxkrv
	I0108 20:51:44.272730   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lxkrv
	I0108 20:51:44.272736   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:44.272743   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:44.272749   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:44.275606   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:44.275633   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:44.275642   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:44 GMT
	I0108 20:51:44.275647   35097 round_trippers.go:580]     Audit-Id: 3f696239-b9e8-4504-aa75-c9e0e6d3c201
	I0108 20:51:44.275652   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:44.275658   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:44.275663   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:44.275668   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:44.275887   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lxkrv","generateName":"kube-proxy-","namespace":"kube-system","uid":"d7fed398-b2ff-4ec4-a1a6-d0a7b8dca989","resourceVersion":"739","creationTimestamp":"2024-01-08T20:40:52Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:40:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0108 20:51:44.472451   35097 request.go:629] Waited for 196.129759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m03
	I0108 20:51:44.472530   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m03
	I0108 20:51:44.472538   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:44.472549   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:44.472562   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:44.475394   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:44.475414   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:44.475420   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:44.475426   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:44.475431   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:44.475436   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:44.475444   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:44 GMT
	I0108 20:51:44.475451   35097 round_trippers.go:580]     Audit-Id: 231e2338-5eb1-422e-8d03-6b876e225642
	I0108 20:51:44.475552   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m03","uid":"f402a58c-763c-4188-b0f9-533674f03d66","resourceVersion":"1089","creationTimestamp":"2024-01-08T20:41:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_51_42_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:41:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3966 chars]
	I0108 20:51:44.475842   35097 pod_ready.go:92] pod "kube-proxy-lxkrv" in "kube-system" namespace has status "Ready":"True"
	I0108 20:51:44.475860   35097 pod_ready.go:81] duration metric: took 397.6754ms waiting for pod "kube-proxy-lxkrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:44.475870   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z9xrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:44.673090   35097 request.go:629] Waited for 197.157844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9xrv
	I0108 20:51:44.673174   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9xrv
	I0108 20:51:44.673180   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:44.673191   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:44.673202   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:44.677092   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:51:44.677115   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:44.677122   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:44 GMT
	I0108 20:51:44.677127   35097 round_trippers.go:580]     Audit-Id: 5f6f55cc-0086-4436-bd75-71a36032f61e
	I0108 20:51:44.677133   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:44.677138   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:44.677143   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:44.677148   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:44.677311   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z9xrv","generateName":"kube-proxy-","namespace":"kube-system","uid":"a0843325-2adf-4c2f-8489-067554648b52","resourceVersion":"810","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 20:51:44.873136   35097 request.go:629] Waited for 195.435263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:51:44.873210   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:51:44.873215   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:44.873222   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:44.873228   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:44.876024   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:44.876042   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:44.876048   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:44.876054   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:44 GMT
	I0108 20:51:44.876066   35097 round_trippers.go:580]     Audit-Id: a9aeb0d9-c0a5-4c5c-a604-602f6b8be3f8
	I0108 20:51:44.876076   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:44.876087   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:44.876114   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:44.876518   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 20:51:44.876848   35097 pod_ready.go:92] pod "kube-proxy-z9xrv" in "kube-system" namespace has status "Ready":"True"
	I0108 20:51:44.876864   35097 pod_ready.go:81] duration metric: took 400.989214ms waiting for pod "kube-proxy-z9xrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:44.876874   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:45.073033   35097 request.go:629] Waited for 196.059172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340815
	I0108 20:51:45.073095   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340815
	I0108 20:51:45.073100   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:45.073108   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:45.073115   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:45.075958   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:45.075985   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:45.075995   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:45.076001   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:45.076006   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:45.076011   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:45.076018   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:45 GMT
	I0108 20:51:45.076028   35097 round_trippers.go:580]     Audit-Id: 8d90a7d0-fa33-45cb-8cf9-71289ed4b483
	I0108 20:51:45.076209   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-340815","namespace":"kube-system","uid":"008c4fe8-78b1-4326-8452-215037af26d6","resourceVersion":"888","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0c87b92132627dab75791d3cff759e12","kubernetes.io/config.mirror":"0c87b92132627dab75791d3cff759e12","kubernetes.io/config.seen":"2024-01-08T20:38:05.870865233Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0108 20:51:45.273028   35097 request.go:629] Waited for 196.425564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:51:45.273089   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:51:45.273094   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:45.273102   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:45.273108   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:45.275735   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:51:45.275759   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:45.275768   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:45.275778   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:45.275786   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:45.275798   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:45 GMT
	I0108 20:51:45.275810   35097 round_trippers.go:580]     Audit-Id: cff539f1-be3a-469a-800b-a64524a658fd
	I0108 20:51:45.275815   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:45.276159   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 20:51:45.276494   35097 pod_ready.go:92] pod "kube-scheduler-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:51:45.276510   35097 pod_ready.go:81] duration metric: took 399.621445ms waiting for pod "kube-scheduler-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:51:45.276520   35097 pod_ready.go:38] duration metric: took 2.401200371s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
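
	The pod_ready waits above simply GET each system-critical pod from the API server and check its Ready condition until it reports True or the 6m0s budget expires. As a rough, illustrative sketch only (not minikube's actual implementation), a similar readiness poll with client-go could look like the following; the pod name and namespace are taken from the log above, and the default kubeconfig location is an assumption for the example:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption for this sketch: the kubeconfig at the default location points at the cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll one kube-system pod (name taken from the log above) until it is Ready or the budget expires.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-h4v6v", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}

	The real helper additionally spreads its polling across all of the labelled control-plane pods and backs off when the client-side throttler (the "Waited for ... due to client-side throttling" lines) delays a request.
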
	I0108 20:51:45.276533   35097 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:51:45.276583   35097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:51:45.291083   35097 system_svc.go:56] duration metric: took 14.539669ms WaitForService to wait for kubelet.
	I0108 20:51:45.291113   35097 kubeadm.go:581] duration metric: took 2.439469969s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:51:45.291130   35097 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:51:45.472490   35097 request.go:629] Waited for 181.293507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes
	I0108 20:51:45.472560   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0108 20:51:45.472565   35097 round_trippers.go:469] Request Headers:
	I0108 20:51:45.472573   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:51:45.472579   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:51:45.475879   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:51:45.475917   35097 round_trippers.go:577] Response Headers:
	I0108 20:51:45.475925   35097 round_trippers.go:580]     Audit-Id: a081c4c8-835b-4f2c-bfa0-848a9a36d0ee
	I0108 20:51:45.475930   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:51:45.475935   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:51:45.475940   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:51:45.475945   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:51:45.475950   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:51:45 GMT
	I0108 20:51:45.476571   35097 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1106"},"items":[{"metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16210 chars]
	I0108 20:51:45.477133   35097 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:51:45.477150   35097 node_conditions.go:123] node cpu capacity is 2
	I0108 20:51:45.477159   35097 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:51:45.477163   35097 node_conditions.go:123] node cpu capacity is 2
	I0108 20:51:45.477166   35097 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:51:45.477170   35097 node_conditions.go:123] node cpu capacity is 2
	I0108 20:51:45.477173   35097 node_conditions.go:105] duration metric: took 186.039636ms to run NodePressure ...
	I0108 20:51:45.477182   35097 start.go:228] waiting for startup goroutines ...
	I0108 20:51:45.477202   35097 start.go:242] writing updated cluster config ...
	I0108 20:51:45.477632   35097 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:51:45.477710   35097 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/config.json ...
	I0108 20:51:45.480971   35097 out.go:177] * Starting worker node multinode-340815-m03 in cluster multinode-340815
	I0108 20:51:45.482597   35097 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:51:45.482618   35097 cache.go:56] Caching tarball of preloaded images
	I0108 20:51:45.482708   35097 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 20:51:45.482720   35097 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 20:51:45.482806   35097 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/config.json ...
	I0108 20:51:45.482970   35097 start.go:365] acquiring machines lock for multinode-340815-m03: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 20:51:45.483010   35097 start.go:369] acquired machines lock for "multinode-340815-m03" in 22.094µs
	I0108 20:51:45.483024   35097 start.go:96] Skipping create...Using existing machine configuration
	I0108 20:51:45.483028   35097 fix.go:54] fixHost starting: m03
	I0108 20:51:45.483269   35097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:51:45.483289   35097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:51:45.497861   35097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I0108 20:51:45.498318   35097 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:51:45.498806   35097 main.go:141] libmachine: Using API Version  1
	I0108 20:51:45.498827   35097 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:51:45.499129   35097 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:51:45.499302   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .DriverName
	I0108 20:51:45.499466   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetState
	I0108 20:51:45.501291   35097 fix.go:102] recreateIfNeeded on multinode-340815-m03: state=Running err=<nil>
	W0108 20:51:45.501307   35097 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 20:51:45.503610   35097 out.go:177] * Updating the running kvm2 "multinode-340815-m03" VM ...
	I0108 20:51:45.505276   35097 machine.go:88] provisioning docker machine ...
	I0108 20:51:45.505298   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .DriverName
	I0108 20:51:45.505532   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetMachineName
	I0108 20:51:45.505695   35097 buildroot.go:166] provisioning hostname "multinode-340815-m03"
	I0108 20:51:45.505729   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetMachineName
	I0108 20:51:45.505897   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHHostname
	I0108 20:51:45.508523   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:51:45.509049   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:01:bc", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:41:29 +0000 UTC Type:0 Mac:52:54:00:9e:01:bc Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-340815-m03 Clientid:01:52:54:00:9e:01:bc}
	I0108 20:51:45.509080   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined IP address 192.168.39.249 and MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:51:45.509226   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHPort
	I0108 20:51:45.509428   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHKeyPath
	I0108 20:51:45.509614   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHKeyPath
	I0108 20:51:45.509765   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHUsername
	I0108 20:51:45.509929   35097 main.go:141] libmachine: Using SSH client type: native
	I0108 20:51:45.510229   35097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0108 20:51:45.510242   35097 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-340815-m03 && echo "multinode-340815-m03" | sudo tee /etc/hostname
	I0108 20:51:45.641774   35097 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-340815-m03
	
	I0108 20:51:45.641806   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHHostname
	I0108 20:51:45.644704   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:51:45.645121   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:01:bc", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:41:29 +0000 UTC Type:0 Mac:52:54:00:9e:01:bc Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-340815-m03 Clientid:01:52:54:00:9e:01:bc}
	I0108 20:51:45.645153   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined IP address 192.168.39.249 and MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:51:45.645301   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHPort
	I0108 20:51:45.645525   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHKeyPath
	I0108 20:51:45.645695   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHKeyPath
	I0108 20:51:45.645860   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHUsername
	I0108 20:51:45.646041   35097 main.go:141] libmachine: Using SSH client type: native
	I0108 20:51:45.646337   35097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0108 20:51:45.646373   35097 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-340815-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-340815-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-340815-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 20:51:45.761116   35097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 20:51:45.761151   35097 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17907-10702/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-10702/.minikube}
	I0108 20:51:45.761167   35097 buildroot.go:174] setting up certificates
	I0108 20:51:45.761176   35097 provision.go:83] configureAuth start
	I0108 20:51:45.761184   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetMachineName
	I0108 20:51:45.761433   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetIP
	I0108 20:51:45.764021   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:51:45.764366   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:01:bc", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:41:29 +0000 UTC Type:0 Mac:52:54:00:9e:01:bc Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-340815-m03 Clientid:01:52:54:00:9e:01:bc}
	I0108 20:51:45.764400   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined IP address 192.168.39.249 and MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:51:45.764533   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHHostname
	I0108 20:51:45.767062   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:51:45.767431   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:01:bc", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:41:29 +0000 UTC Type:0 Mac:52:54:00:9e:01:bc Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-340815-m03 Clientid:01:52:54:00:9e:01:bc}
	I0108 20:51:45.767458   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined IP address 192.168.39.249 and MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:51:45.767639   35097 provision.go:138] copyHostCerts
	I0108 20:51:45.767663   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 20:51:45.767689   35097 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem, removing ...
	I0108 20:51:45.767699   35097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 20:51:45.767787   35097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem (1123 bytes)
	I0108 20:51:45.767884   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 20:51:45.767902   35097 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem, removing ...
	I0108 20:51:45.767909   35097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 20:51:45.767937   35097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem (1675 bytes)
	I0108 20:51:45.767983   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 20:51:45.767999   35097 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem, removing ...
	I0108 20:51:45.768005   35097 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 20:51:45.768025   35097 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem (1082 bytes)
	I0108 20:51:45.768068   35097 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem org=jenkins.multinode-340815-m03 san=[192.168.39.249 192.168.39.249 localhost 127.0.0.1 minikube multinode-340815-m03]
	I0108 20:51:46.012422   35097 provision.go:172] copyRemoteCerts
	I0108 20:51:46.012479   35097 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 20:51:46.012501   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHHostname
	I0108 20:51:46.015428   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:51:46.015832   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:01:bc", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:41:29 +0000 UTC Type:0 Mac:52:54:00:9e:01:bc Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-340815-m03 Clientid:01:52:54:00:9e:01:bc}
	I0108 20:51:46.015864   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined IP address 192.168.39.249 and MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:51:46.016073   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHPort
	I0108 20:51:46.016302   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHKeyPath
	I0108 20:51:46.016489   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHUsername
	I0108 20:51:46.016686   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m03/id_rsa Username:docker}
	I0108 20:51:46.105452   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 20:51:46.105521   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 20:51:46.131296   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 20:51:46.131382   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 20:51:46.154468   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 20:51:46.154554   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 20:51:46.178159   35097 provision.go:86] duration metric: configureAuth took 416.972188ms
	I0108 20:51:46.178185   35097 buildroot.go:189] setting minikube options for container-runtime
	I0108 20:51:46.178390   35097 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:51:46.178460   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHHostname
	I0108 20:51:46.181096   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:51:46.181427   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:01:bc", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:41:29 +0000 UTC Type:0 Mac:52:54:00:9e:01:bc Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-340815-m03 Clientid:01:52:54:00:9e:01:bc}
	I0108 20:51:46.181461   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined IP address 192.168.39.249 and MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:51:46.181558   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHPort
	I0108 20:51:46.181777   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHKeyPath
	I0108 20:51:46.181970   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHKeyPath
	I0108 20:51:46.182132   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHUsername
	I0108 20:51:46.182277   35097 main.go:141] libmachine: Using SSH client type: native
	I0108 20:51:46.182569   35097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0108 20:51:46.182582   35097 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 20:53:16.906614   35097 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 20:53:16.906642   35097 machine.go:91] provisioned docker machine in 1m31.401350967s
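	(Editor's note: the literal "%!s(MISSING)" inside the provisioning command above is a Go fmt artifact, not part of the intended shell: a %s verb reached a printf-style formatter with no matching argument. A minimal, stand-alone Go sketch reproducing the token; the command string here is illustrative, not minikube's actual code:)

	package main

	import "fmt"

	func main() {
		// One %s verb, zero arguments: fmt flags the gap as %!s(MISSING).
		cmd := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s \"...\"")
		fmt.Println(cmd)
		// Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "..."
	}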
	I0108 20:53:16.906652   35097 start.go:300] post-start starting for "multinode-340815-m03" (driver="kvm2")
	I0108 20:53:16.906661   35097 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 20:53:16.906678   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .DriverName
	I0108 20:53:16.907014   35097 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 20:53:16.907045   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHHostname
	I0108 20:53:16.910156   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:53:16.910542   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:01:bc", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:41:29 +0000 UTC Type:0 Mac:52:54:00:9e:01:bc Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-340815-m03 Clientid:01:52:54:00:9e:01:bc}
	I0108 20:53:16.910567   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined IP address 192.168.39.249 and MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:53:16.910749   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHPort
	I0108 20:53:16.910937   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHKeyPath
	I0108 20:53:16.911104   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHUsername
	I0108 20:53:16.911235   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m03/id_rsa Username:docker}
	I0108 20:53:17.003571   35097 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 20:53:17.008487   35097 command_runner.go:130] > NAME=Buildroot
	I0108 20:53:17.008518   35097 command_runner.go:130] > VERSION=2021.02.12-1-gae27a7b-dirty
	I0108 20:53:17.008526   35097 command_runner.go:130] > ID=buildroot
	I0108 20:53:17.008534   35097 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 20:53:17.008541   35097 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 20:53:17.008575   35097 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 20:53:17.008592   35097 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/addons for local assets ...
	I0108 20:53:17.008669   35097 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/files for local assets ...
	I0108 20:53:17.008768   35097 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> 178962.pem in /etc/ssl/certs
	I0108 20:53:17.008781   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> /etc/ssl/certs/178962.pem
	I0108 20:53:17.008872   35097 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 20:53:17.017605   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /etc/ssl/certs/178962.pem (1708 bytes)
	I0108 20:53:17.043310   35097 start.go:303] post-start completed in 136.646619ms
	I0108 20:53:17.043336   35097 fix.go:56] fixHost completed within 1m31.560307305s
	I0108 20:53:17.043355   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHHostname
	I0108 20:53:17.045955   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:53:17.046274   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:01:bc", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:41:29 +0000 UTC Type:0 Mac:52:54:00:9e:01:bc Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-340815-m03 Clientid:01:52:54:00:9e:01:bc}
	I0108 20:53:17.046305   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined IP address 192.168.39.249 and MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:53:17.046418   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHPort
	I0108 20:53:17.046623   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHKeyPath
	I0108 20:53:17.046809   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHKeyPath
	I0108 20:53:17.046958   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHUsername
	I0108 20:53:17.047120   35097 main.go:141] libmachine: Using SSH client type: native
	I0108 20:53:17.047510   35097 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0108 20:53:17.047528   35097 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 20:53:17.165444   35097 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704747197.159778175
	
	I0108 20:53:17.165468   35097 fix.go:206] guest clock: 1704747197.159778175
	I0108 20:53:17.165477   35097 fix.go:219] Guest: 2024-01-08 20:53:17.159778175 +0000 UTC Remote: 2024-01-08 20:53:17.043340186 +0000 UTC m=+565.580817720 (delta=116.437989ms)
	I0108 20:53:17.165494   35097 fix.go:190] guest clock delta is within tolerance: 116.437989ms
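	(Editor's note: the delta is simply guest minus remote: 20:53:17.159778175 − 20:53:17.043340186 ≈ 0.116438 s, i.e. the 116.437989 ms reported, which is why fix.go logs it as within tolerance.)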
	I0108 20:53:17.165500   35097 start.go:83] releasing machines lock for "multinode-340815-m03", held for 1m31.6824812s
	I0108 20:53:17.165530   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .DriverName
	I0108 20:53:17.165817   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetIP
	I0108 20:53:17.168543   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:53:17.168890   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:01:bc", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:41:29 +0000 UTC Type:0 Mac:52:54:00:9e:01:bc Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-340815-m03 Clientid:01:52:54:00:9e:01:bc}
	I0108 20:53:17.168919   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined IP address 192.168.39.249 and MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:53:17.171492   35097 out.go:177] * Found network options:
	I0108 20:53:17.173141   35097 out.go:177]   - NO_PROXY=192.168.39.196,192.168.39.78
	W0108 20:53:17.174559   35097 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 20:53:17.174583   35097 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 20:53:17.174615   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .DriverName
	I0108 20:53:17.175237   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .DriverName
	I0108 20:53:17.175449   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .DriverName
	I0108 20:53:17.175545   35097 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 20:53:17.175581   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHHostname
	W0108 20:53:17.175654   35097 proxy.go:119] fail to check proxy env: Error ip not in block
	W0108 20:53:17.175680   35097 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 20:53:17.175743   35097 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 20:53:17.175759   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHHostname
	I0108 20:53:17.178291   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:53:17.178570   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:53:17.178760   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:01:bc", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:41:29 +0000 UTC Type:0 Mac:52:54:00:9e:01:bc Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-340815-m03 Clientid:01:52:54:00:9e:01:bc}
	I0108 20:53:17.178794   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined IP address 192.168.39.249 and MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:53:17.178944   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHPort
	I0108 20:53:17.179055   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:01:bc", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:41:29 +0000 UTC Type:0 Mac:52:54:00:9e:01:bc Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-340815-m03 Clientid:01:52:54:00:9e:01:bc}
	I0108 20:53:17.179094   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHKeyPath
	I0108 20:53:17.179109   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined IP address 192.168.39.249 and MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:53:17.179201   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHPort
	I0108 20:53:17.179268   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHUsername
	I0108 20:53:17.179374   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHKeyPath
	I0108 20:53:17.179466   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m03/id_rsa Username:docker}
	I0108 20:53:17.179530   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetSSHUsername
	I0108 20:53:17.179670   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m03/id_rsa Username:docker}
	I0108 20:53:17.418490   35097 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 20:53:17.418612   35097 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 20:53:17.424644   35097 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 20:53:17.424814   35097 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 20:53:17.424909   35097 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 20:53:17.433500   35097 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 20:53:17.433524   35097 start.go:475] detecting cgroup driver to use...
	I0108 20:53:17.433591   35097 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 20:53:17.447034   35097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 20:53:17.459393   35097 docker.go:217] disabling cri-docker service (if available) ...
	I0108 20:53:17.459450   35097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 20:53:17.473435   35097 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 20:53:17.486048   35097 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 20:53:17.607439   35097 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 20:53:17.724307   35097 docker.go:233] disabling docker service ...
	I0108 20:53:17.724375   35097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 20:53:17.739067   35097 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 20:53:17.752385   35097 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 20:53:17.870573   35097 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 20:53:17.987017   35097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 20:53:17.999417   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 20:53:18.017912   35097 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0108 20:53:18.017947   35097 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 20:53:18.017988   35097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:53:18.027677   35097 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 20:53:18.027737   35097 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:53:18.037760   35097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 20:53:18.047855   35097 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
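	(Editor's note: taken together, the sed edits above should leave the CRI-O drop-in with roughly the following settings; this is a reconstruction from the logged commands, not a capture of the file on the node:)

	# /etc/crio/crio.conf.d/02-crio.conf (reconstructed, section headers omitted)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"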
	I0108 20:53:18.058344   35097 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 20:53:18.070139   35097 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 20:53:18.079464   35097 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 20:53:18.079531   35097 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 20:53:18.088612   35097 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 20:53:18.216395   35097 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 20:53:27.062471   35097 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.846038853s)
	I0108 20:53:27.062520   35097 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 20:53:27.062576   35097 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 20:53:27.068896   35097 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0108 20:53:27.068923   35097 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 20:53:27.068930   35097 command_runner.go:130] > Device: 16h/22d	Inode: 1232        Links: 1
	I0108 20:53:27.068936   35097 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:53:27.068941   35097 command_runner.go:130] > Access: 2024-01-08 20:53:26.975075143 +0000
	I0108 20:53:27.068949   35097 command_runner.go:130] > Modify: 2024-01-08 20:53:26.975075143 +0000
	I0108 20:53:27.068958   35097 command_runner.go:130] > Change: 2024-01-08 20:53:26.975075143 +0000
	I0108 20:53:27.068968   35097 command_runner.go:130] >  Birth: -
	I0108 20:53:27.068988   35097 start.go:543] Will wait 60s for crictl version
	I0108 20:53:27.069033   35097 ssh_runner.go:195] Run: which crictl
	I0108 20:53:27.072886   35097 command_runner.go:130] > /usr/bin/crictl
	I0108 20:53:27.073070   35097 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 20:53:27.115385   35097 command_runner.go:130] > Version:  0.1.0
	I0108 20:53:27.115477   35097 command_runner.go:130] > RuntimeName:  cri-o
	I0108 20:53:27.115727   35097 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0108 20:53:27.115744   35097 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 20:53:27.117252   35097 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 20:53:27.117331   35097 ssh_runner.go:195] Run: crio --version
	I0108 20:53:27.160772   35097 command_runner.go:130] > crio version 1.24.1
	I0108 20:53:27.160801   35097 command_runner.go:130] > Version:          1.24.1
	I0108 20:53:27.160813   35097 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 20:53:27.160819   35097 command_runner.go:130] > GitTreeState:     dirty
	I0108 20:53:27.160827   35097 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 20:53:27.160834   35097 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 20:53:27.160841   35097 command_runner.go:130] > Compiler:         gc
	I0108 20:53:27.160848   35097 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:53:27.160857   35097 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:53:27.160868   35097 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:53:27.160875   35097 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:53:27.160880   35097 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:53:27.160961   35097 ssh_runner.go:195] Run: crio --version
	I0108 20:53:27.207894   35097 command_runner.go:130] > crio version 1.24.1
	I0108 20:53:27.207923   35097 command_runner.go:130] > Version:          1.24.1
	I0108 20:53:27.207934   35097 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0108 20:53:27.207941   35097 command_runner.go:130] > GitTreeState:     dirty
	I0108 20:53:27.207953   35097 command_runner.go:130] > BuildDate:        2023-12-16T11:46:37Z
	I0108 20:53:27.207960   35097 command_runner.go:130] > GoVersion:        go1.19.9
	I0108 20:53:27.207967   35097 command_runner.go:130] > Compiler:         gc
	I0108 20:53:27.207973   35097 command_runner.go:130] > Platform:         linux/amd64
	I0108 20:53:27.207985   35097 command_runner.go:130] > Linkmode:         dynamic
	I0108 20:53:27.207996   35097 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0108 20:53:27.208005   35097 command_runner.go:130] > SeccompEnabled:   true
	I0108 20:53:27.208011   35097 command_runner.go:130] > AppArmorEnabled:  false
	I0108 20:53:27.210558   35097 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 20:53:27.212198   35097 out.go:177]   - env NO_PROXY=192.168.39.196
	I0108 20:53:27.213649   35097 out.go:177]   - env NO_PROXY=192.168.39.196,192.168.39.78
	I0108 20:53:27.214965   35097 main.go:141] libmachine: (multinode-340815-m03) Calling .GetIP
	I0108 20:53:27.217417   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:53:27.217850   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:01:bc", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:41:29 +0000 UTC Type:0 Mac:52:54:00:9e:01:bc Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-340815-m03 Clientid:01:52:54:00:9e:01:bc}
	I0108 20:53:27.217877   35097 main.go:141] libmachine: (multinode-340815-m03) DBG | domain multinode-340815-m03 has defined IP address 192.168.39.249 and MAC address 52:54:00:9e:01:bc in network mk-multinode-340815
	I0108 20:53:27.218090   35097 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 20:53:27.222264   35097 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0108 20:53:27.222482   35097 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815 for IP: 192.168.39.249
	I0108 20:53:27.222506   35097 certs.go:190] acquiring lock for shared ca certs: {Name:mke01aa9d73e320a9a3907677cf29c75f0fa86d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:53:27.222621   35097 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key
	I0108 20:53:27.222655   35097 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key
	I0108 20:53:27.222665   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 20:53:27.222677   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 20:53:27.222689   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 20:53:27.222701   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 20:53:27.222753   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem (1338 bytes)
	W0108 20:53:27.222782   35097 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896_empty.pem, impossibly tiny 0 bytes
	I0108 20:53:27.222791   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 20:53:27.222816   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem (1082 bytes)
	I0108 20:53:27.222844   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem (1123 bytes)
	I0108 20:53:27.222866   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem (1675 bytes)
	I0108 20:53:27.222901   35097 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem (1708 bytes)
	I0108 20:53:27.222928   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> /usr/share/ca-certificates/178962.pem
	I0108 20:53:27.222941   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:53:27.222952   35097 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem -> /usr/share/ca-certificates/17896.pem
	I0108 20:53:27.223270   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 20:53:27.248107   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 20:53:27.271017   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 20:53:27.293750   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 20:53:27.314109   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /usr/share/ca-certificates/178962.pem (1708 bytes)
	I0108 20:53:27.338520   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 20:53:27.362015   35097 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem --> /usr/share/ca-certificates/17896.pem (1338 bytes)
	I0108 20:53:27.385653   35097 ssh_runner.go:195] Run: openssl version
	I0108 20:53:27.391303   35097 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 20:53:27.391389   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 20:53:27.401626   35097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:53:27.406683   35097 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:53:27.406823   35097 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:53:27.406886   35097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 20:53:27.412704   35097 command_runner.go:130] > b5213941
	I0108 20:53:27.412778   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 20:53:27.421565   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17896.pem && ln -fs /usr/share/ca-certificates/17896.pem /etc/ssl/certs/17896.pem"
	I0108 20:53:27.431965   35097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17896.pem
	I0108 20:53:27.436632   35097 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 20:53:27.436784   35097 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 20:53:27.436842   35097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17896.pem
	I0108 20:53:27.442877   35097 command_runner.go:130] > 51391683
	I0108 20:53:27.442951   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17896.pem /etc/ssl/certs/51391683.0"
	I0108 20:53:27.452123   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178962.pem && ln -fs /usr/share/ca-certificates/178962.pem /etc/ssl/certs/178962.pem"
	I0108 20:53:27.462245   35097 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178962.pem
	I0108 20:53:27.466657   35097 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 20:53:27.466854   35097 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 20:53:27.466913   35097 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178962.pem
	I0108 20:53:27.472405   35097 command_runner.go:130] > 3ec20f2e
	I0108 20:53:27.472689   35097 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178962.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 20:53:27.481350   35097 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 20:53:27.485594   35097 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:53:27.485721   35097 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 20:53:27.485807   35097 ssh_runner.go:195] Run: crio config
	I0108 20:53:27.548665   35097 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0108 20:53:27.548691   35097 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0108 20:53:27.548700   35097 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0108 20:53:27.548704   35097 command_runner.go:130] > #
	I0108 20:53:27.548711   35097 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0108 20:53:27.548718   35097 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0108 20:53:27.548727   35097 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0108 20:53:27.548737   35097 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0108 20:53:27.548742   35097 command_runner.go:130] > # reload'.
	I0108 20:53:27.548751   35097 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0108 20:53:27.548762   35097 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0108 20:53:27.548773   35097 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0108 20:53:27.548787   35097 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0108 20:53:27.548792   35097 command_runner.go:130] > [crio]
	I0108 20:53:27.548798   35097 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0108 20:53:27.548806   35097 command_runner.go:130] > # containers images, in this directory.
	I0108 20:53:27.548812   35097 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0108 20:53:27.548831   35097 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0108 20:53:27.548922   35097 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0108 20:53:27.548940   35097 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0108 20:53:27.548947   35097 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0108 20:53:27.549282   35097 command_runner.go:130] > storage_driver = "overlay"
	I0108 20:53:27.549314   35097 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0108 20:53:27.549330   35097 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0108 20:53:27.549337   35097 command_runner.go:130] > storage_option = [
	I0108 20:53:27.549473   35097 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0108 20:53:27.549522   35097 command_runner.go:130] > ]
	I0108 20:53:27.549539   35097 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0108 20:53:27.549550   35097 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0108 20:53:27.549841   35097 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0108 20:53:27.549856   35097 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0108 20:53:27.549866   35097 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0108 20:53:27.549874   35097 command_runner.go:130] > # always happen on a node reboot
	I0108 20:53:27.550388   35097 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0108 20:53:27.550407   35097 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0108 20:53:27.550417   35097 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0108 20:53:27.550436   35097 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0108 20:53:27.550768   35097 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0108 20:53:27.550784   35097 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0108 20:53:27.550797   35097 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0108 20:53:27.551288   35097 command_runner.go:130] > # internal_wipe = true
	I0108 20:53:27.551305   35097 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0108 20:53:27.551315   35097 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0108 20:53:27.551324   35097 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0108 20:53:27.551590   35097 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0108 20:53:27.551605   35097 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0108 20:53:27.551611   35097 command_runner.go:130] > [crio.api]
	I0108 20:53:27.551621   35097 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0108 20:53:27.551742   35097 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0108 20:53:27.551760   35097 command_runner.go:130] > # IP address on which the stream server will listen.
	I0108 20:53:27.552051   35097 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0108 20:53:27.552066   35097 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0108 20:53:27.552072   35097 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0108 20:53:27.552473   35097 command_runner.go:130] > # stream_port = "0"
	I0108 20:53:27.552491   35097 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0108 20:53:27.552809   35097 command_runner.go:130] > # stream_enable_tls = false
	I0108 20:53:27.552832   35097 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0108 20:53:27.553206   35097 command_runner.go:130] > # stream_idle_timeout = ""
	I0108 20:53:27.553221   35097 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0108 20:53:27.553227   35097 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0108 20:53:27.553231   35097 command_runner.go:130] > # minutes.
	I0108 20:53:27.553432   35097 command_runner.go:130] > # stream_tls_cert = ""
	I0108 20:53:27.553446   35097 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0108 20:53:27.553459   35097 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0108 20:53:27.553653   35097 command_runner.go:130] > # stream_tls_key = ""
	I0108 20:53:27.553669   35097 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0108 20:53:27.553681   35097 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0108 20:53:27.553693   35097 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0108 20:53:27.553747   35097 command_runner.go:130] > # stream_tls_ca = ""
	I0108 20:53:27.553776   35097 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:53:27.553848   35097 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0108 20:53:27.553867   35097 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0108 20:53:27.554139   35097 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0108 20:53:27.554164   35097 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0108 20:53:27.554174   35097 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0108 20:53:27.554188   35097 command_runner.go:130] > [crio.runtime]
	I0108 20:53:27.554197   35097 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0108 20:53:27.554214   35097 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0108 20:53:27.554221   35097 command_runner.go:130] > # "nofile=1024:2048"
	I0108 20:53:27.554231   35097 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0108 20:53:27.554348   35097 command_runner.go:130] > # default_ulimits = [
	I0108 20:53:27.554472   35097 command_runner.go:130] > # ]
	I0108 20:53:27.554487   35097 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0108 20:53:27.555474   35097 command_runner.go:130] > # no_pivot = false
	I0108 20:53:27.555492   35097 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0108 20:53:27.555502   35097 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0108 20:53:27.555510   35097 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0108 20:53:27.555518   35097 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0108 20:53:27.555526   35097 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0108 20:53:27.555537   35097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:53:27.555546   35097 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0108 20:53:27.555559   35097 command_runner.go:130] > # Cgroup setting for conmon
	I0108 20:53:27.555572   35097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0108 20:53:27.555583   35097 command_runner.go:130] > conmon_cgroup = "pod"
	I0108 20:53:27.555597   35097 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0108 20:53:27.555613   35097 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0108 20:53:27.555622   35097 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0108 20:53:27.555628   35097 command_runner.go:130] > conmon_env = [
	I0108 20:53:27.555638   35097 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0108 20:53:27.555643   35097 command_runner.go:130] > ]
	I0108 20:53:27.555652   35097 command_runner.go:130] > # Additional environment variables to set for all the
	I0108 20:53:27.555663   35097 command_runner.go:130] > # containers. These are overridden if set in the
	I0108 20:53:27.555674   35097 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0108 20:53:27.555683   35097 command_runner.go:130] > # default_env = [
	I0108 20:53:27.555693   35097 command_runner.go:130] > # ]
	I0108 20:53:27.555704   35097 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0108 20:53:27.555714   35097 command_runner.go:130] > # selinux = false
	I0108 20:53:27.555724   35097 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0108 20:53:27.555754   35097 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0108 20:53:27.555770   35097 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0108 20:53:27.555778   35097 command_runner.go:130] > # seccomp_profile = ""
	I0108 20:53:27.555787   35097 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0108 20:53:27.555799   35097 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0108 20:53:27.555812   35097 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0108 20:53:27.555822   35097 command_runner.go:130] > # which might increase security.
	I0108 20:53:27.555832   35097 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0108 20:53:27.555842   35097 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0108 20:53:27.555854   35097 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0108 20:53:27.555864   35097 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0108 20:53:27.555877   35097 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0108 20:53:27.555888   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:53:27.555896   35097 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0108 20:53:27.555905   35097 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0108 20:53:27.555913   35097 command_runner.go:130] > # the cgroup blockio controller.
	I0108 20:53:27.555920   35097 command_runner.go:130] > # blockio_config_file = ""
	I0108 20:53:27.555930   35097 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0108 20:53:27.555940   35097 command_runner.go:130] > # irqbalance daemon.
	I0108 20:53:27.555949   35097 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0108 20:53:27.555959   35097 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0108 20:53:27.555970   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:53:27.555980   35097 command_runner.go:130] > # rdt_config_file = ""
	I0108 20:53:27.555991   35097 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0108 20:53:27.556004   35097 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0108 20:53:27.556016   35097 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0108 20:53:27.556028   35097 command_runner.go:130] > # separate_pull_cgroup = ""
	I0108 20:53:27.556040   35097 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0108 20:53:27.556051   35097 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0108 20:53:27.556057   35097 command_runner.go:130] > # will be added.
	I0108 20:53:27.556065   35097 command_runner.go:130] > # default_capabilities = [
	I0108 20:53:27.556069   35097 command_runner.go:130] > # 	"CHOWN",
	I0108 20:53:27.556075   35097 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0108 20:53:27.556079   35097 command_runner.go:130] > # 	"FSETID",
	I0108 20:53:27.556083   35097 command_runner.go:130] > # 	"FOWNER",
	I0108 20:53:27.556100   35097 command_runner.go:130] > # 	"SETGID",
	I0108 20:53:27.556107   35097 command_runner.go:130] > # 	"SETUID",
	I0108 20:53:27.556113   35097 command_runner.go:130] > # 	"SETPCAP",
	I0108 20:53:27.556120   35097 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0108 20:53:27.556130   35097 command_runner.go:130] > # 	"KILL",
	I0108 20:53:27.556137   35097 command_runner.go:130] > # ]
	I0108 20:53:27.556150   35097 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0108 20:53:27.556162   35097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:53:27.556171   35097 command_runner.go:130] > # default_sysctls = [
	I0108 20:53:27.556177   35097 command_runner.go:130] > # ]
	I0108 20:53:27.556191   35097 command_runner.go:130] > # List of devices on the host that a
	I0108 20:53:27.556205   35097 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0108 20:53:27.556215   35097 command_runner.go:130] > # allowed_devices = [
	I0108 20:53:27.556224   35097 command_runner.go:130] > # 	"/dev/fuse",
	I0108 20:53:27.556231   35097 command_runner.go:130] > # ]
	I0108 20:53:27.556242   35097 command_runner.go:130] > # List of additional devices, specified as
	I0108 20:53:27.556257   35097 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0108 20:53:27.556270   35097 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0108 20:53:27.556296   35097 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0108 20:53:27.556310   35097 command_runner.go:130] > # additional_devices = [
	I0108 20:53:27.556317   35097 command_runner.go:130] > # ]
	I0108 20:53:27.556326   35097 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0108 20:53:27.556336   35097 command_runner.go:130] > # cdi_spec_dirs = [
	I0108 20:53:27.556343   35097 command_runner.go:130] > # 	"/etc/cdi",
	I0108 20:53:27.556353   35097 command_runner.go:130] > # 	"/var/run/cdi",
	I0108 20:53:27.556359   35097 command_runner.go:130] > # ]
	I0108 20:53:27.556372   35097 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0108 20:53:27.556382   35097 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0108 20:53:27.556386   35097 command_runner.go:130] > # Defaults to false.
	I0108 20:53:27.556395   35097 command_runner.go:130] > # device_ownership_from_security_context = false
	I0108 20:53:27.556409   35097 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0108 20:53:27.556423   35097 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0108 20:53:27.556430   35097 command_runner.go:130] > # hooks_dir = [
	I0108 20:53:27.556439   35097 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0108 20:53:27.556445   35097 command_runner.go:130] > # ]
	I0108 20:53:27.556458   35097 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0108 20:53:27.556471   35097 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0108 20:53:27.556483   35097 command_runner.go:130] > # its default mounts from the following two files:
	I0108 20:53:27.556490   35097 command_runner.go:130] > #
	I0108 20:53:27.556496   35097 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0108 20:53:27.556509   35097 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0108 20:53:27.556522   35097 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0108 20:53:27.556528   35097 command_runner.go:130] > #
	I0108 20:53:27.556542   35097 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0108 20:53:27.556556   35097 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0108 20:53:27.556570   35097 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0108 20:53:27.556582   35097 command_runner.go:130] > #      only add mounts it finds in this file.
	I0108 20:53:27.556589   35097 command_runner.go:130] > #
	I0108 20:53:27.556593   35097 command_runner.go:130] > # default_mounts_file = ""
	I0108 20:53:27.556605   35097 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0108 20:53:27.556619   35097 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0108 20:53:27.556630   35097 command_runner.go:130] > pids_limit = 1024
	I0108 20:53:27.556643   35097 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0108 20:53:27.556656   35097 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0108 20:53:27.556670   35097 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0108 20:53:27.556685   35097 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0108 20:53:27.556693   35097 command_runner.go:130] > # log_size_max = -1
	I0108 20:53:27.556708   35097 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0108 20:53:27.556719   35097 command_runner.go:130] > # log_to_journald = false
	I0108 20:53:27.556733   35097 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0108 20:53:27.556749   35097 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0108 20:53:27.556760   35097 command_runner.go:130] > # Path to directory for container attach sockets.
	I0108 20:53:27.556772   35097 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0108 20:53:27.556784   35097 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0108 20:53:27.556789   35097 command_runner.go:130] > # bind_mount_prefix = ""
	I0108 20:53:27.556801   35097 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0108 20:53:27.556813   35097 command_runner.go:130] > # read_only = false
	I0108 20:53:27.556826   35097 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0108 20:53:27.556840   35097 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0108 20:53:27.556850   35097 command_runner.go:130] > # live configuration reload.
	I0108 20:53:27.556860   35097 command_runner.go:130] > # log_level = "info"
	I0108 20:53:27.556872   35097 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0108 20:53:27.556881   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:53:27.556890   35097 command_runner.go:130] > # log_filter = ""
	I0108 20:53:27.556904   35097 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0108 20:53:27.556917   35097 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0108 20:53:27.556927   35097 command_runner.go:130] > # separated by comma.
	I0108 20:53:27.556937   35097 command_runner.go:130] > # uid_mappings = ""
	I0108 20:53:27.556950   35097 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0108 20:53:27.556963   35097 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0108 20:53:27.556973   35097 command_runner.go:130] > # separated by comma.
	I0108 20:53:27.556982   35097 command_runner.go:130] > # gid_mappings = ""
	I0108 20:53:27.556992   35097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0108 20:53:27.557005   35097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:53:27.557020   35097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:53:27.557031   35097 command_runner.go:130] > # minimum_mappable_uid = -1
	I0108 20:53:27.557045   35097 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0108 20:53:27.557057   35097 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0108 20:53:27.557072   35097 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0108 20:53:27.557082   35097 command_runner.go:130] > # minimum_mappable_gid = -1
	I0108 20:53:27.557090   35097 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0108 20:53:27.557103   35097 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0108 20:53:27.557117   35097 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0108 20:53:27.557128   35097 command_runner.go:130] > # ctr_stop_timeout = 30
	I0108 20:53:27.557141   35097 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0108 20:53:27.557153   35097 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0108 20:53:27.557165   35097 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0108 20:53:27.557176   35097 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0108 20:53:27.557184   35097 command_runner.go:130] > drop_infra_ctr = false
	I0108 20:53:27.557196   35097 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0108 20:53:27.557210   35097 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0108 20:53:27.557225   35097 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0108 20:53:27.557236   35097 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0108 20:53:27.557248   35097 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0108 20:53:27.557261   35097 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0108 20:53:27.557269   35097 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0108 20:53:27.557282   35097 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0108 20:53:27.557293   35097 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0108 20:53:27.557307   35097 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0108 20:53:27.557321   35097 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0108 20:53:27.557334   35097 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0108 20:53:27.557345   35097 command_runner.go:130] > # default_runtime = "runc"
	I0108 20:53:27.557354   35097 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0108 20:53:27.557366   35097 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0108 20:53:27.557387   35097 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0108 20:53:27.557399   35097 command_runner.go:130] > # creation as a file is not desired either.
	I0108 20:53:27.557414   35097 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0108 20:53:27.557426   35097 command_runner.go:130] > # the hostname is being managed dynamically.
	I0108 20:53:27.557436   35097 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0108 20:53:27.557442   35097 command_runner.go:130] > # ]
	I0108 20:53:27.557451   35097 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0108 20:53:27.557466   35097 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0108 20:53:27.557480   35097 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0108 20:53:27.557493   35097 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0108 20:53:27.557502   35097 command_runner.go:130] > #
	I0108 20:53:27.557510   35097 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0108 20:53:27.557521   35097 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0108 20:53:27.557527   35097 command_runner.go:130] > #  runtime_type = "oci"
	I0108 20:53:27.557532   35097 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0108 20:53:27.557543   35097 command_runner.go:130] > #  privileged_without_host_devices = false
	I0108 20:53:27.557554   35097 command_runner.go:130] > #  allowed_annotations = []
	I0108 20:53:27.557561   35097 command_runner.go:130] > # Where:
	I0108 20:53:27.557573   35097 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0108 20:53:27.557587   35097 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0108 20:53:27.557600   35097 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0108 20:53:27.557613   35097 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0108 20:53:27.557622   35097 command_runner.go:130] > #   in $PATH.
	I0108 20:53:27.557628   35097 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0108 20:53:27.557640   35097 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0108 20:53:27.557654   35097 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0108 20:53:27.557664   35097 command_runner.go:130] > #   state.
	I0108 20:53:27.557678   35097 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0108 20:53:27.557690   35097 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0108 20:53:27.557703   35097 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0108 20:53:27.557716   35097 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0108 20:53:27.557725   35097 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0108 20:53:27.557743   35097 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0108 20:53:27.557754   35097 command_runner.go:130] > #   The currently recognized values are:
	I0108 20:53:27.557765   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0108 20:53:27.557781   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0108 20:53:27.557791   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0108 20:53:27.557802   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0108 20:53:27.557818   35097 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0108 20:53:27.557829   35097 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0108 20:53:27.557837   35097 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0108 20:53:27.557844   35097 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0108 20:53:27.557852   35097 command_runner.go:130] > #   should be moved to the container's cgroup
	I0108 20:53:27.557856   35097 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0108 20:53:27.557861   35097 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0108 20:53:27.557866   35097 command_runner.go:130] > runtime_type = "oci"
	I0108 20:53:27.557876   35097 command_runner.go:130] > runtime_root = "/run/runc"
	I0108 20:53:27.557883   35097 command_runner.go:130] > runtime_config_path = ""
	I0108 20:53:27.557892   35097 command_runner.go:130] > monitor_path = ""
	I0108 20:53:27.557899   35097 command_runner.go:130] > monitor_cgroup = ""
	I0108 20:53:27.557910   35097 command_runner.go:130] > monitor_exec_cgroup = ""
	I0108 20:53:27.557923   35097 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0108 20:53:27.557933   35097 command_runner.go:130] > # running containers
	I0108 20:53:27.557944   35097 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0108 20:53:27.557958   35097 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0108 20:53:27.558023   35097 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0108 20:53:27.558040   35097 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0108 20:53:27.558047   35097 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0108 20:53:27.558052   35097 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0108 20:53:27.558056   35097 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0108 20:53:27.558066   35097 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0108 20:53:27.558071   35097 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0108 20:53:27.558080   35097 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0108 20:53:27.558086   35097 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0108 20:53:27.558094   35097 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0108 20:53:27.558100   35097 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0108 20:53:27.558110   35097 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0108 20:53:27.558120   35097 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0108 20:53:27.558127   35097 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0108 20:53:27.558143   35097 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0108 20:53:27.558159   35097 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0108 20:53:27.558172   35097 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0108 20:53:27.558186   35097 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0108 20:53:27.558195   35097 command_runner.go:130] > # Example:
	I0108 20:53:27.558204   35097 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0108 20:53:27.558215   35097 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0108 20:53:27.558222   35097 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0108 20:53:27.558227   35097 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0108 20:53:27.558231   35097 command_runner.go:130] > # cpuset = 0
	I0108 20:53:27.558236   35097 command_runner.go:130] > # cpushares = "0-1"
	I0108 20:53:27.558244   35097 command_runner.go:130] > # Where:
	I0108 20:53:27.558249   35097 command_runner.go:130] > # The workload name is workload-type.
	I0108 20:53:27.558255   35097 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0108 20:53:27.558263   35097 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0108 20:53:27.558269   35097 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0108 20:53:27.558278   35097 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0108 20:53:27.558284   35097 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0108 20:53:27.558290   35097 command_runner.go:130] > # 
	I0108 20:53:27.558296   35097 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0108 20:53:27.558301   35097 command_runner.go:130] > #
	I0108 20:53:27.558307   35097 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0108 20:53:27.558315   35097 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0108 20:53:27.558321   35097 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0108 20:53:27.558327   35097 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0108 20:53:27.558335   35097 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0108 20:53:27.558339   35097 command_runner.go:130] > [crio.image]
	I0108 20:53:27.558346   35097 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0108 20:53:27.558352   35097 command_runner.go:130] > # default_transport = "docker://"
	I0108 20:53:27.558359   35097 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0108 20:53:27.558367   35097 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:53:27.558371   35097 command_runner.go:130] > # global_auth_file = ""
	I0108 20:53:27.558376   35097 command_runner.go:130] > # The image used to instantiate infra containers.
	I0108 20:53:27.558382   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:53:27.558387   35097 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0108 20:53:27.558395   35097 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0108 20:53:27.558400   35097 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0108 20:53:27.558408   35097 command_runner.go:130] > # This option supports live configuration reload.
	I0108 20:53:27.558412   35097 command_runner.go:130] > # pause_image_auth_file = ""
	I0108 20:53:27.558419   35097 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0108 20:53:27.558425   35097 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0108 20:53:27.558433   35097 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0108 20:53:27.558439   35097 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0108 20:53:27.558445   35097 command_runner.go:130] > # pause_command = "/pause"
	I0108 20:53:27.558451   35097 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0108 20:53:27.558460   35097 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0108 20:53:27.558468   35097 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0108 20:53:27.558474   35097 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0108 20:53:27.558482   35097 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0108 20:53:27.558488   35097 command_runner.go:130] > # signature_policy = ""
	I0108 20:53:27.558495   35097 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0108 20:53:27.558503   35097 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0108 20:53:27.558507   35097 command_runner.go:130] > # changing them here.
	I0108 20:53:27.558511   35097 command_runner.go:130] > # insecure_registries = [
	I0108 20:53:27.558517   35097 command_runner.go:130] > # ]
	I0108 20:53:27.558523   35097 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0108 20:53:27.558531   35097 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0108 20:53:27.558535   35097 command_runner.go:130] > # image_volumes = "mkdir"
	I0108 20:53:27.558542   35097 command_runner.go:130] > # Temporary directory to use for storing big files
	I0108 20:53:27.558546   35097 command_runner.go:130] > # big_files_temporary_dir = ""
	I0108 20:53:27.558554   35097 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0108 20:53:27.558560   35097 command_runner.go:130] > # CNI plugins.
	I0108 20:53:27.558564   35097 command_runner.go:130] > [crio.network]
	I0108 20:53:27.558574   35097 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0108 20:53:27.558579   35097 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0108 20:53:27.558586   35097 command_runner.go:130] > # cni_default_network = ""
	I0108 20:53:27.558591   35097 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0108 20:53:27.558598   35097 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0108 20:53:27.558603   35097 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0108 20:53:27.558610   35097 command_runner.go:130] > # plugin_dirs = [
	I0108 20:53:27.558614   35097 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0108 20:53:27.558620   35097 command_runner.go:130] > # ]
	I0108 20:53:27.558626   35097 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0108 20:53:27.558631   35097 command_runner.go:130] > [crio.metrics]
	I0108 20:53:27.558636   35097 command_runner.go:130] > # Globally enable or disable metrics support.
	I0108 20:53:27.558643   35097 command_runner.go:130] > enable_metrics = true
	I0108 20:53:27.558647   35097 command_runner.go:130] > # Specify enabled metrics collectors.
	I0108 20:53:27.558654   35097 command_runner.go:130] > # Per default all metrics are enabled.
	I0108 20:53:27.558660   35097 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0108 20:53:27.558668   35097 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0108 20:53:27.558674   35097 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0108 20:53:27.558681   35097 command_runner.go:130] > # metrics_collectors = [
	I0108 20:53:27.558685   35097 command_runner.go:130] > # 	"operations",
	I0108 20:53:27.558691   35097 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0108 20:53:27.558696   35097 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0108 20:53:27.558703   35097 command_runner.go:130] > # 	"operations_errors",
	I0108 20:53:27.558707   35097 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0108 20:53:27.558713   35097 command_runner.go:130] > # 	"image_pulls_by_name",
	I0108 20:53:27.558718   35097 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0108 20:53:27.558724   35097 command_runner.go:130] > # 	"image_pulls_failures",
	I0108 20:53:27.558729   35097 command_runner.go:130] > # 	"image_pulls_successes",
	I0108 20:53:27.558739   35097 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0108 20:53:27.558744   35097 command_runner.go:130] > # 	"image_layer_reuse",
	I0108 20:53:27.558750   35097 command_runner.go:130] > # 	"containers_oom_total",
	I0108 20:53:27.558754   35097 command_runner.go:130] > # 	"containers_oom",
	I0108 20:53:27.558758   35097 command_runner.go:130] > # 	"processes_defunct",
	I0108 20:53:27.558762   35097 command_runner.go:130] > # 	"operations_total",
	I0108 20:53:27.558767   35097 command_runner.go:130] > # 	"operations_latency_seconds",
	I0108 20:53:27.558773   35097 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0108 20:53:27.558779   35097 command_runner.go:130] > # 	"operations_errors_total",
	I0108 20:53:27.558784   35097 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0108 20:53:27.558788   35097 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0108 20:53:27.558795   35097 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0108 20:53:27.558799   35097 command_runner.go:130] > # 	"image_pulls_success_total",
	I0108 20:53:27.558808   35097 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0108 20:53:27.558812   35097 command_runner.go:130] > # 	"containers_oom_count_total",
	I0108 20:53:27.558817   35097 command_runner.go:130] > # ]
	I0108 20:53:27.558822   35097 command_runner.go:130] > # The port on which the metrics server will listen.
	I0108 20:53:27.558826   35097 command_runner.go:130] > # metrics_port = 9090
	I0108 20:53:27.558833   35097 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0108 20:53:27.558838   35097 command_runner.go:130] > # metrics_socket = ""
	I0108 20:53:27.558843   35097 command_runner.go:130] > # The certificate for the secure metrics server.
	I0108 20:53:27.558852   35097 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0108 20:53:27.558860   35097 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0108 20:53:27.558867   35097 command_runner.go:130] > # certificate on any modification event.
	I0108 20:53:27.558871   35097 command_runner.go:130] > # metrics_cert = ""
	I0108 20:53:27.558878   35097 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0108 20:53:27.558884   35097 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0108 20:53:27.558890   35097 command_runner.go:130] > # metrics_key = ""
	I0108 20:53:27.558896   35097 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0108 20:53:27.558902   35097 command_runner.go:130] > [crio.tracing]
	I0108 20:53:27.558907   35097 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0108 20:53:27.558914   35097 command_runner.go:130] > # enable_tracing = false
	I0108 20:53:27.558919   35097 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0108 20:53:27.558926   35097 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0108 20:53:27.558931   35097 command_runner.go:130] > # Number of samples to collect per million spans.
	I0108 20:53:27.558937   35097 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0108 20:53:27.558943   35097 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0108 20:53:27.558949   35097 command_runner.go:130] > [crio.stats]
	I0108 20:53:27.558955   35097 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0108 20:53:27.558962   35097 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0108 20:53:27.558969   35097 command_runner.go:130] > # stats_collection_period = 0
	I0108 20:53:27.559001   35097 command_runner.go:130] ! time="2024-01-08 20:53:27.537727985Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0108 20:53:27.559014   35097 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
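
The dump above is the /etc/crio/crio.conf generated for this node before the kubelet is started. As a rough sketch (assuming the default /etc/crio/crio.conf path and the github.com/BurntSushi/toml package, not part of the test run), the values minikube pins here can be read back and checked like this:

```go
// Read the generated CRI-O config and print the values minikube sets explicitly.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type crioConf struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
			Conmon        string `toml:"conmon"`
			PidsLimit     int64  `toml:"pids_limit"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConf
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager) // expected: cgroupfs
	fmt.Println("conmon:", cfg.Crio.Runtime.Conmon)                // expected: /usr/libexec/crio/conmon
	fmt.Println("pids_limit:", cfg.Crio.Runtime.PidsLimit)         // expected: 1024
	fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)         // expected: registry.k8s.io/pause:3.9
}
```
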
	I0108 20:53:27.559067   35097 cni.go:84] Creating CNI manager for ""
	I0108 20:53:27.559077   35097 cni.go:136] 3 nodes found, recommending kindnet
	I0108 20:53:27.559085   35097 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 20:53:27.559102   35097 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-340815 NodeName:multinode-340815-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 20:53:27.559197   35097 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-340815-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 20:53:27.559242   35097 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-340815-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
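
The kubeadm InitConfiguration/ClusterConfiguration and the kubelet unit drop-in above are rendered from the kubeadm options struct logged earlier. A hypothetical, much-reduced sketch of that kind of templating (not minikube's actual template, which covers many more fields):

```go
// Render a minimal InitConfiguration fragment from a parameter struct.
package main

import (
	"log"
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type params struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values taken from the log above.
	p := params{
		AdvertiseAddress: "192.168.39.249",
		APIServerPort:    8443,
		CRISocket:        "/var/run/crio/crio.sock",
		NodeName:         "multinode-340815-m03",
		NodeIP:           "192.168.39.249",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		log.Fatal(err)
	}
}
```
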
	I0108 20:53:27.559288   35097 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 20:53:27.569671   35097 command_runner.go:130] > kubeadm
	I0108 20:53:27.569698   35097 command_runner.go:130] > kubectl
	I0108 20:53:27.569705   35097 command_runner.go:130] > kubelet
	I0108 20:53:27.569761   35097 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 20:53:27.569829   35097 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 20:53:27.578853   35097 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0108 20:53:27.597970   35097 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 20:53:27.615546   35097 ssh_runner.go:195] Run: grep 192.168.39.196	control-plane.minikube.internal$ /etc/hosts
	I0108 20:53:27.619370   35097 command_runner.go:130] > 192.168.39.196	control-plane.minikube.internal
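
The grep above confirms that control-plane.minikube.internal is pinned in /etc/hosts to the control-plane IP, which is what the join command issued below relies on to reach the API server. A small sketch performing the same check programmatically (assuming the standard /etc/hosts location):

```go
// Verify that control-plane.minikube.internal has an /etc/hosts entry.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[1] == "control-plane.minikube.internal" {
			fmt.Println("resolves to", fields[0]) // expected: 192.168.39.196
			return
		}
	}
	log.Fatal("control-plane.minikube.internal not found in /etc/hosts")
}
```
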
	I0108 20:53:27.619510   35097 host.go:66] Checking if "multinode-340815" exists ...
	I0108 20:53:27.619766   35097 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:53:27.619939   35097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:53:27.619983   35097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:53:27.634536   35097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39045
	I0108 20:53:27.634905   35097 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:53:27.635355   35097 main.go:141] libmachine: Using API Version  1
	I0108 20:53:27.635377   35097 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:53:27.635680   35097 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:53:27.635855   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:53:27.635988   35097 start.go:304] JoinCluster: &{Name:multinode-340815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-340815 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:53:27.636160   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 20:53:27.636185   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:53:27.638812   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:53:27.639188   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:53:27.639206   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:53:27.639312   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:53:27.639460   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:53:27.639598   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:53:27.639704   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:53:27.820038   35097 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 60mle3.0vk9u79j4zcxh0cm --discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 
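
The --discovery-token-ca-cert-hash printed above is defined by kubeadm as the SHA-256 of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A sketch that recomputes it, assuming the CA path /var/lib/minikube/certs/ca.crt used elsewhere in this run:

```go
// Recompute the discovery-token-ca-cert-hash from the cluster CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}
```
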
	I0108 20:53:27.822292   35097 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0108 20:53:27.822334   35097 host.go:66] Checking if "multinode-340815" exists ...
	I0108 20:53:27.822649   35097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:53:27.822686   35097 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:53:27.837029   35097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0108 20:53:27.837534   35097 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:53:27.838012   35097 main.go:141] libmachine: Using API Version  1
	I0108 20:53:27.838030   35097 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:53:27.838336   35097 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:53:27.838521   35097 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:53:27.838712   35097 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-340815-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0108 20:53:27.838734   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:53:27.841502   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:53:27.841851   35097 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:49:02 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:53:27.841878   35097 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:53:27.842031   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:53:27.842233   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:53:27.842410   35097 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:53:27.842533   35097 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:53:28.036816   35097 command_runner.go:130] > node/multinode-340815-m03 cordoned
	I0108 20:53:31.086523   35097 command_runner.go:130] > pod "busybox-5bc68d56bd-jqqkf" has DeletionTimestamp older than 1 seconds, skipping
	I0108 20:53:31.086555   35097 command_runner.go:130] > node/multinode-340815-m03 drained
	I0108 20:53:31.088163   35097 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0108 20:53:31.088188   35097 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-wfgln, kube-system/kube-proxy-lxkrv
	I0108 20:53:31.088224   35097 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-340815-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.249479409s)
	I0108 20:53:31.088247   35097 node.go:108] successfully drained node "m03"
	I0108 20:53:31.088600   35097 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:53:31.088804   35097 kapi.go:59] client config for multinode-340815: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:53:31.089077   35097 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0108 20:53:31.089121   35097 round_trippers.go:463] DELETE https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m03
	I0108 20:53:31.089132   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:31.089140   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:31.089145   35097 round_trippers.go:473]     Content-Type: application/json
	I0108 20:53:31.089153   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:31.104165   35097 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0108 20:53:31.104186   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:31.104195   35097 round_trippers.go:580]     Content-Length: 171
	I0108 20:53:31.104204   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:31 GMT
	I0108 20:53:31.104213   35097 round_trippers.go:580]     Audit-Id: bcb80fe3-9597-4ee9-b588-d1320effc406
	I0108 20:53:31.104221   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:31.104230   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:31.104238   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:31.104246   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:31.104275   35097 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-340815-m03","kind":"nodes","uid":"f402a58c-763c-4188-b0f9-533674f03d66"}}
	I0108 20:53:31.104308   35097 node.go:124] successfully deleted node "m03"
	I0108 20:53:31.104322   35097 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
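The raw DELETE against /api/v1/nodes/multinode-340815-m03 shown above is the same operation kubectl performs when removing a node; reproducing this step by hand against the cluster's kubeconfig would look roughly like:

    kubectl delete node multinode-340815-m03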
	I0108 20:53:31.104344   35097 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0108 20:53:31.104367   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 60mle3.0vk9u79j4zcxh0cm --discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-340815-m03"
	I0108 20:53:31.162284   35097 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 20:53:31.345235   35097 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 20:53:31.345284   35097 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 20:53:31.409380   35097 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 20:53:31.409405   35097 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 20:53:31.409411   35097 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 20:53:31.556125   35097 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 20:53:32.083433   35097 command_runner.go:130] > This node has joined the cluster:
	I0108 20:53:32.083460   35097 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 20:53:32.083471   35097 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 20:53:32.083477   35097 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 20:53:32.086525   35097 command_runner.go:130] ! W0108 20:53:31.156756    2359 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0108 20:53:32.086552   35097 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0108 20:53:32.086566   35097 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0108 20:53:32.086579   35097 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0108 20:53:32.086609   35097 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
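kubeadm warns that a --cri-socket value without a URL scheme is deprecated and auto-prepends "unix". An illustrative join command that avoids that warning, keeping the endpoint, token, discovery hash and node name from the command above, would be roughly:

    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token 60mle3.0vk9u79j4zcxh0cm \
      --discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 \
      --ignore-preflight-errors=all \
      --cri-socket unix:///var/run/crio/crio.sock \
      --node-name=multinode-340815-m03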
	I0108 20:53:32.364639   35097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=multinode-340815 minikube.k8s.io/updated_at=2024_01_08T20_53_32_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 20:53:32.484150   35097 command_runner.go:130] > node/multinode-340815-m02 labeled
	I0108 20:53:32.501946   35097 command_runner.go:130] > node/multinode-340815-m03 labeled
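The label step applies minikube's node metadata to every node not marked as primary, so both workers are relabelled here. One way to confirm the result (not something the test itself runs) is to list the minikube label columns:

    kubectl get nodes -L minikube.k8s.io/name,minikube.k8s.io/primary,minikube.k8s.io/version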
	I0108 20:53:32.503963   35097 start.go:306] JoinCluster complete in 4.867971815s
	I0108 20:53:32.503990   35097 cni.go:84] Creating CNI manager for ""
	I0108 20:53:32.504002   35097 cni.go:136] 3 nodes found, recommending kindnet
	I0108 20:53:32.504066   35097 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 20:53:32.510732   35097 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 20:53:32.510764   35097 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 20:53:32.510774   35097 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 20:53:32.510784   35097 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 20:53:32.510794   35097 command_runner.go:130] > Access: 2024-01-08 20:49:02.982432026 +0000
	I0108 20:53:32.510806   35097 command_runner.go:130] > Modify: 2023-12-16 11:53:47.000000000 +0000
	I0108 20:53:32.510817   35097 command_runner.go:130] > Change: 2024-01-08 20:49:01.008432026 +0000
	I0108 20:53:32.510825   35097 command_runner.go:130] >  Birth: -
	I0108 20:53:32.511561   35097 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 20:53:32.511577   35097 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 20:53:32.536879   35097 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 20:53:32.890000   35097 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 20:53:32.894019   35097 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 20:53:32.896549   35097 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 20:53:32.906053   35097 command_runner.go:130] > daemonset.apps/kindnet configured
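With three nodes detected, minikube re-applies the kindnet manifest and only the DaemonSet is reported as changed. An illustrative check, outside the test, that kindnet has scheduled onto the rejoined node would be:

    kubectl -n kube-system rollout status daemonset/kindnet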
	I0108 20:53:32.908947   35097 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:53:32.909155   35097 kapi.go:59] client config for multinode-340815: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:53:32.909423   35097 round_trippers.go:463] GET https://192.168.39.196:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 20:53:32.909433   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:32.909440   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:32.909446   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:32.912664   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:53:32.912681   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:32.912687   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:32.912692   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:32.912698   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:32.912703   35097 round_trippers.go:580]     Content-Length: 291
	I0108 20:53:32.912708   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:32 GMT
	I0108 20:53:32.912717   35097 round_trippers.go:580]     Audit-Id: e3d1c756-81b1-475f-a59a-5339bb918fb1
	I0108 20:53:32.912722   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:32.912740   35097 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8a90ea09-afeb-4dda-ab10-18a22e37ea78","resourceVersion":"928","creationTimestamp":"2024-01-08T20:38:05Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 20:53:32.912822   35097 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-340815" context rescaled to 1 replicas
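Minikube reads the coredns Scale subresource and pins the deployment to a single replica. Done by hand, the same rescale is roughly:

    kubectl -n kube-system scale deployment coredns --replicas=1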
	I0108 20:53:32.912847   35097 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.249 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0108 20:53:32.915152   35097 out.go:177] * Verifying Kubernetes components...
	I0108 20:53:32.916718   35097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:53:32.930305   35097 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:53:32.930533   35097 kapi.go:59] client config for multinode-340815: &rest.Config{Host:"https://192.168.39.196:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/multinode-340815/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 20:53:32.930745   35097 node_ready.go:35] waiting up to 6m0s for node "multinode-340815-m03" to be "Ready" ...
	I0108 20:53:32.930828   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m03
	I0108 20:53:32.930837   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:32.930845   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:32.930851   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:32.934402   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:53:32.934425   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:32.934432   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:32 GMT
	I0108 20:53:32.934438   35097 round_trippers.go:580]     Audit-Id: a06b260f-2279-460b-b21a-7d8eb373807d
	I0108 20:53:32.934443   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:32.934448   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:32.934453   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:32.934459   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:32.935252   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m03","uid":"fb749479-6a15-4578-8297-636a252d0498","resourceVersion":"1269","creationTimestamp":"2024-01-08T20:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_53_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:53:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 20:53:32.935606   35097 node_ready.go:49] node "multinode-340815-m03" has status "Ready":"True"
	I0108 20:53:32.935625   35097 node_ready.go:38] duration metric: took 4.865984ms waiting for node "multinode-340815-m03" to be "Ready" ...
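The node reports Ready almost immediately because the kubelet re-registers an existing node object. The same 6-minute readiness gate can be expressed, illustratively, with kubectl:

    kubectl wait --for=condition=Ready node/multinode-340815-m03 --timeout=6m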
	I0108 20:53:32.935636   35097 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:53:32.935708   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0108 20:53:32.935719   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:32.935729   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:32.935741   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:32.943777   35097 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 20:53:32.943798   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:32.943805   35097 round_trippers.go:580]     Audit-Id: 20df58a1-2ef0-4583-954e-f17d44b63a70
	I0108 20:53:32.943810   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:32.943815   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:32.943828   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:32.943837   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:32.943846   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:32 GMT
	I0108 20:53:32.944879   35097 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1275"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"924","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82237 chars]
	I0108 20:53:32.947429   35097 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:32.947507   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h4v6v
	I0108 20:53:32.947518   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:32.947526   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:32.947532   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:32.949771   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:53:32.949786   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:32.949792   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:32.949798   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:32.949803   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:32 GMT
	I0108 20:53:32.949808   35097 round_trippers.go:580]     Audit-Id: 93990157-5159-4762-8c22-11e6da4c3638
	I0108 20:53:32.949812   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:32.949818   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:32.950042   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h4v6v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"5c1ccbb8-1747-4b6f-b40c-c54670e49d54","resourceVersion":"924","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"ed179286-fa42-41ff-991d-84b09f8a405f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ed179286-fa42-41ff-991d-84b09f8a405f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0108 20:53:32.950563   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:53:32.950578   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:32.950588   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:32.950596   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:32.953166   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:53:32.953183   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:32.953190   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:32.953195   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:32 GMT
	I0108 20:53:32.953200   35097 round_trippers.go:580]     Audit-Id: dc00b80c-b6dc-4942-9e65-9cf244c5d404
	I0108 20:53:32.953211   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:32.953216   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:32.953221   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:32.953486   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 20:53:32.953879   35097 pod_ready.go:92] pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace has status "Ready":"True"
	I0108 20:53:32.953898   35097 pod_ready.go:81] duration metric: took 6.447019ms waiting for pod "coredns-5dd5756b68-h4v6v" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:32.953910   35097 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:32.953974   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-340815
	I0108 20:53:32.953984   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:32.953995   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:32.954004   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:32.956084   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:53:32.956115   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:32.956126   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:32.956136   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:32.956144   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:32 GMT
	I0108 20:53:32.956152   35097 round_trippers.go:580]     Audit-Id: a9518619-f53b-48e2-bd9f-3e66ed06cfd9
	I0108 20:53:32.956158   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:32.956167   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:32.956411   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-340815","namespace":"kube-system","uid":"c6d1e2c4-6dbc-4495-ac68-c4b030195c2c","resourceVersion":"916","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.196:2379","kubernetes.io/config.hash":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.mirror":"84677478c7d9bd76d7500f07832cd213","kubernetes.io/config.seen":"2024-01-08T20:38:05.870869333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0108 20:53:32.956842   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:53:32.956859   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:32.956869   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:32.956878   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:32.958943   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:53:32.958959   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:32.958966   35097 round_trippers.go:580]     Audit-Id: c3b0d86b-75d1-43fe-9751-a13e1c3bf771
	I0108 20:53:32.958974   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:32.958981   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:32.958989   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:32.959002   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:32.959018   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:32 GMT
	I0108 20:53:32.959208   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 20:53:32.959583   35097 pod_ready.go:92] pod "etcd-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:53:32.959603   35097 pod_ready.go:81] duration metric: took 5.685278ms waiting for pod "etcd-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:32.959626   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:32.959697   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-340815
	I0108 20:53:32.959710   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:32.959720   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:32.959733   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:32.961741   35097 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 20:53:32.961755   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:32.961764   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:32 GMT
	I0108 20:53:32.961771   35097 round_trippers.go:580]     Audit-Id: 39e68bbc-3874-49ed-8d67-b9939c4d97b7
	I0108 20:53:32.961779   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:32.961788   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:32.961798   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:32.961813   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:32.962060   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-340815","namespace":"kube-system","uid":"523b3dcf-2fae-43b4-a9c6-cd2337ae6d6f","resourceVersion":"914","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.196:8443","kubernetes.io/config.hash":"5a9f4acc9b0ffa502cc0493a6d857b92","kubernetes.io/config.mirror":"5a9f4acc9b0ffa502cc0493a6d857b92","kubernetes.io/config.seen":"2024-01-08T20:38:05.870870627Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0108 20:53:32.962397   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:53:32.962410   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:32.962420   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:32.962429   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:32.964737   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:53:32.964758   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:32.964764   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:32.964769   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:32.964774   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:32.964779   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:32.964784   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:32 GMT
	I0108 20:53:32.964789   35097 round_trippers.go:580]     Audit-Id: c11d75cd-d5aa-4911-b231-be7f2d341ae5
	I0108 20:53:32.965373   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 20:53:32.965746   35097 pod_ready.go:92] pod "kube-apiserver-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:53:32.965762   35097 pod_ready.go:81] duration metric: took 6.116769ms waiting for pod "kube-apiserver-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:32.965770   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:32.965819   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-340815
	I0108 20:53:32.965826   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:32.965832   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:32.965838   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:32.968052   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:53:32.968072   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:32.968080   35097 round_trippers.go:580]     Audit-Id: 407b9d40-06b0-459e-92d3-0f0fd8adf9e3
	I0108 20:53:32.968104   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:32.968115   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:32.968123   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:32.968132   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:32.968142   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:32 GMT
	I0108 20:53:32.968565   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-340815","namespace":"kube-system","uid":"3b29ca3f-d23b-4add-a5fb-d59381398862","resourceVersion":"912","creationTimestamp":"2024-01-08T20:38:05Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1f741652d6560a2396658aaab123d801","kubernetes.io/config.mirror":"1f741652d6560a2396658aaab123d801","kubernetes.io/config.seen":"2024-01-08T20:37:56.785419514Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0108 20:53:32.969117   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:53:32.969136   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:32.969146   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:32.969155   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:32.973562   35097 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 20:53:32.973580   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:32.973589   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:32.973597   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:32 GMT
	I0108 20:53:32.973605   35097 round_trippers.go:580]     Audit-Id: df764a8c-f909-48da-a002-2134dcabbcc3
	I0108 20:53:32.973612   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:32.973620   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:32.973629   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:32.973822   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 20:53:32.974208   35097 pod_ready.go:92] pod "kube-controller-manager-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:53:32.974232   35097 pod_ready.go:81] duration metric: took 8.45507ms waiting for pod "kube-controller-manager-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:32.974245   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j5w6d" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:33.131636   35097 request.go:629] Waited for 157.325259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5w6d
	I0108 20:53:33.131712   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j5w6d
	I0108 20:53:33.131719   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:33.131730   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:33.131749   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:33.134930   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:53:33.134956   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:33.134967   35097 round_trippers.go:580]     Audit-Id: 1a4b318a-e24d-4c69-9c16-e9eb15e0ad69
	I0108 20:53:33.134976   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:33.134984   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:33.134993   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:33.135001   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:33.135009   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:33 GMT
	I0108 20:53:33.135425   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-j5w6d","generateName":"kube-proxy-","namespace":"kube-system","uid":"61568130-b69e-48ce-86f0-9a9e63ed99ab","resourceVersion":"1103","creationTimestamp":"2024-01-08T20:39:57Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:39:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0108 20:53:33.331195   35097 request.go:629] Waited for 195.363109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:53:33.331272   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m02
	I0108 20:53:33.331281   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:33.331292   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:33.331306   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:33.334090   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:53:33.334110   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:33.334121   35097 round_trippers.go:580]     Audit-Id: 2d0238e1-ebd4-40c0-a1d0-25e176eb4147
	I0108 20:53:33.334130   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:33.334138   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:33.334150   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:33.334157   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:33.334165   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:33 GMT
	I0108 20:53:33.334392   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m02","uid":"a3509707-a676-45da-aba0-ccedece9b18c","resourceVersion":"1268","creationTimestamp":"2024-01-08T20:51:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_53_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:51:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0108 20:53:33.334743   35097 pod_ready.go:92] pod "kube-proxy-j5w6d" in "kube-system" namespace has status "Ready":"True"
	I0108 20:53:33.334761   35097 pod_ready.go:81] duration metric: took 360.508256ms waiting for pod "kube-proxy-j5w6d" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:33.334778   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lxkrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:33.531764   35097 request.go:629] Waited for 196.908366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lxkrv
	I0108 20:53:33.531829   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lxkrv
	I0108 20:53:33.531836   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:33.531846   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:33.531855   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:33.535729   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:53:33.535761   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:33.535772   35097 round_trippers.go:580]     Audit-Id: aed9a859-c3fb-4edc-9921-cb8f93dfc61b
	I0108 20:53:33.535781   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:33.535795   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:33.535807   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:33.535816   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:33.535825   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:33 GMT
	I0108 20:53:33.535953   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lxkrv","generateName":"kube-proxy-","namespace":"kube-system","uid":"d7fed398-b2ff-4ec4-a1a6-d0a7b8dca989","resourceVersion":"1273","creationTimestamp":"2024-01-08T20:40:52Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:40:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I0108 20:53:33.731893   35097 request.go:629] Waited for 195.379524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m03
	I0108 20:53:33.731970   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m03
	I0108 20:53:33.731977   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:33.731987   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:33.732022   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:33.735873   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:53:33.735897   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:33.735906   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:33.735913   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:33.735920   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:33.735927   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:33 GMT
	I0108 20:53:33.735935   35097 round_trippers.go:580]     Audit-Id: b8ef2b83-d786-4342-ae1f-c08f02cf40a7
	I0108 20:53:33.735950   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:33.736151   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m03","uid":"fb749479-6a15-4578-8297-636a252d0498","resourceVersion":"1269","creationTimestamp":"2024-01-08T20:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_53_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:53:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 20:53:33.931733   35097 request.go:629] Waited for 96.274622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lxkrv
	I0108 20:53:33.931803   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lxkrv
	I0108 20:53:33.931810   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:33.931820   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:33.931835   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:33.939063   35097 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 20:53:33.939084   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:33.939091   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:33 GMT
	I0108 20:53:33.939096   35097 round_trippers.go:580]     Audit-Id: e3899c7c-459e-4b5c-9a41-8e1da149d5b4
	I0108 20:53:33.939101   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:33.939108   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:33.939116   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:33.939123   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:33.939713   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lxkrv","generateName":"kube-proxy-","namespace":"kube-system","uid":"d7fed398-b2ff-4ec4-a1a6-d0a7b8dca989","resourceVersion":"1289","creationTimestamp":"2024-01-08T20:40:52Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:40:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0108 20:53:34.131510   35097 request.go:629] Waited for 191.358115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m03
	I0108 20:53:34.131574   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815-m03
	I0108 20:53:34.131581   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:34.131591   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:34.131600   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:34.134977   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:53:34.134995   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:34.135002   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:34.135009   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:34 GMT
	I0108 20:53:34.135018   35097 round_trippers.go:580]     Audit-Id: 71bb02a0-7ac1-426c-aa23-8c172e075e44
	I0108 20:53:34.135028   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:34.135038   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:34.135048   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:34.135378   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815-m03","uid":"fb749479-6a15-4578-8297-636a252d0498","resourceVersion":"1269","creationTimestamp":"2024-01-08T20:53:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T20_53_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:53:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0108 20:53:34.135643   35097 pod_ready.go:92] pod "kube-proxy-lxkrv" in "kube-system" namespace has status "Ready":"True"
	I0108 20:53:34.135660   35097 pod_ready.go:81] duration metric: took 800.872815ms waiting for pod "kube-proxy-lxkrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:34.135673   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z9xrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:34.331040   35097 request.go:629] Waited for 195.304238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9xrv
	I0108 20:53:34.331093   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-z9xrv
	I0108 20:53:34.331098   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:34.331112   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:34.331121   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:34.333879   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:53:34.333907   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:34.333917   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:34.333925   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:34.333934   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:34.333942   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:34 GMT
	I0108 20:53:34.333960   35097 round_trippers.go:580]     Audit-Id: 7a071960-fe61-4152-9bfd-68d3803d33ac
	I0108 20:53:34.333967   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:34.334119   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-z9xrv","generateName":"kube-proxy-","namespace":"kube-system","uid":"a0843325-2adf-4c2f-8489-067554648b52","resourceVersion":"810","creationTimestamp":"2024-01-08T20:38:18Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"272897b4-3da4-4cf1-b574-bb34c7269073","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"272897b4-3da4-4cf1-b574-bb34c7269073\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0108 20:53:34.530923   35097 request.go:629] Waited for 196.284258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:53:34.530993   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:53:34.530998   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:34.531006   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:34.531012   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:34.534203   35097 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 20:53:34.534233   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:34.534243   35097 round_trippers.go:580]     Audit-Id: 0cf215d2-5056-4905-af71-9a21a3f9d445
	I0108 20:53:34.534251   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:34.534259   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:34.534268   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:34.534276   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:34.534283   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:34 GMT
	I0108 20:53:34.534520   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 20:53:34.534927   35097 pod_ready.go:92] pod "kube-proxy-z9xrv" in "kube-system" namespace has status "Ready":"True"
	I0108 20:53:34.534949   35097 pod_ready.go:81] duration metric: took 399.268453ms waiting for pod "kube-proxy-z9xrv" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:34.534962   35097 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:34.731918   35097 request.go:629] Waited for 196.881672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340815
	I0108 20:53:34.731974   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-340815
	I0108 20:53:34.731979   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:34.731986   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:34.731992   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:34.734713   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:53:34.734732   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:34.734739   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:34.734744   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:34.734749   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:34.734754   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:34 GMT
	I0108 20:53:34.734759   35097 round_trippers.go:580]     Audit-Id: 16904b75-d98a-40ca-9bb2-bc567d9063f5
	I0108 20:53:34.734764   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:34.734978   35097 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-340815","namespace":"kube-system","uid":"008c4fe8-78b1-4326-8452-215037af26d6","resourceVersion":"888","creationTimestamp":"2024-01-08T20:38:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0c87b92132627dab75791d3cff759e12","kubernetes.io/config.mirror":"0c87b92132627dab75791d3cff759e12","kubernetes.io/config.seen":"2024-01-08T20:38:05.870865233Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T20:38:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0108 20:53:34.931788   35097 request.go:629] Waited for 196.464621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:53:34.931873   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/multinode-340815
	I0108 20:53:34.931880   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:34.931892   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:34.931905   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:34.934662   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:53:34.934680   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:34.934687   35097 round_trippers.go:580]     Audit-Id: 11e26442-fe29-4f2e-a3fe-68b3f9f4ac60
	I0108 20:53:34.934693   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:34.934698   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:34.934703   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:34.934709   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:34.934714   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:34 GMT
	I0108 20:53:34.934884   35097 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-08T20:38:02Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0108 20:53:34.935176   35097 pod_ready.go:92] pod "kube-scheduler-multinode-340815" in "kube-system" namespace has status "Ready":"True"
	I0108 20:53:34.935188   35097 pod_ready.go:81] duration metric: took 400.21973ms waiting for pod "kube-scheduler-multinode-340815" in "kube-system" namespace to be "Ready" ...
	I0108 20:53:34.935197   35097 pod_ready.go:38] duration metric: took 1.999540417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 20:53:34.935210   35097 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 20:53:34.935253   35097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:53:34.950011   35097 system_svc.go:56] duration metric: took 14.791824ms WaitForService to wait for kubelet.
	I0108 20:53:34.950037   35097 kubeadm.go:581] duration metric: took 2.037167525s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 20:53:34.950059   35097 node_conditions.go:102] verifying NodePressure condition ...
	I0108 20:53:35.131508   35097 request.go:629] Waited for 181.376376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes
	I0108 20:53:35.131563   35097 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0108 20:53:35.131571   35097 round_trippers.go:469] Request Headers:
	I0108 20:53:35.131579   35097 round_trippers.go:473]     Accept: application/json, */*
	I0108 20:53:35.131590   35097 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 20:53:35.134570   35097 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 20:53:35.134589   35097 round_trippers.go:577] Response Headers:
	I0108 20:53:35.134596   35097 round_trippers.go:580]     Date: Mon, 08 Jan 2024 20:53:35 GMT
	I0108 20:53:35.134602   35097 round_trippers.go:580]     Audit-Id: 7f34c3ae-ef57-4db2-bda5-b928b966f352
	I0108 20:53:35.134607   35097 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 20:53:35.134612   35097 round_trippers.go:580]     Content-Type: application/json
	I0108 20:53:35.134617   35097 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3a93f880-9483-4ddc-a7ac-cf95288ef27f
	I0108 20:53:35.134623   35097 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb983115-4cad-4126-9b72-19cf8dafea14
	I0108 20:53:35.134932   35097 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1293"},"items":[{"metadata":{"name":"multinode-340815","uid":"d13844ec-1732-4e5f-9a57-9bd99e6704a7","resourceVersion":"942","creationTimestamp":"2024-01-08T20:38:02Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-340815","kubernetes.io/os":"linux","minikube.k8s.io/commit":"255792ad75c0218cbe9d2c7121633a1b1d442f28","minikube.k8s.io/name":"multinode-340815","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T20_38_06_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16238 chars]
	I0108 20:53:35.135715   35097 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:53:35.135738   35097 node_conditions.go:123] node cpu capacity is 2
	I0108 20:53:35.135750   35097 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:53:35.135755   35097 node_conditions.go:123] node cpu capacity is 2
	I0108 20:53:35.135761   35097 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 20:53:35.135768   35097 node_conditions.go:123] node cpu capacity is 2
	I0108 20:53:35.135781   35097 node_conditions.go:105] duration metric: took 185.716366ms to run NodePressure ...
	I0108 20:53:35.135795   35097 start.go:228] waiting for startup goroutines ...
	I0108 20:53:35.135822   35097 start.go:242] writing updated cluster config ...
	I0108 20:53:35.136196   35097 ssh_runner.go:195] Run: rm -f paused
	I0108 20:53:35.183047   35097 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 20:53:35.186708   35097 out.go:177] * Done! kubectl is now configured to use "multinode-340815" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 20:49:01 UTC, ends at Mon 2024-01-08 20:53:36 UTC. --
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.357353287Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704747216357323978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5a8047d6-867c-4da0-89bf-b4d3736f1ed0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.358340277Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8d2037f4-efdc-43f4-8b32-705bd0ca8af9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.358418835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8d2037f4-efdc-43f4-8b32-705bd0ca8af9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.358722977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b246dcdb8113d21d021a6edaf2160452327d6e5ebc4eb59da563d55e74c3da9,PodSandboxId:6e5a62d5b491bbd06c18bbc642d989bb408becc015d7f4ac51861239a60f8b23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704747008404281932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de357297-4bd9-4c71-ada5-ceace0d38cfb,},Annotations:map[string]string{io.kubernetes.container.hash: c338046,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f2c5e2a8ac0cc2c3efd252134ee59d2dc84459be6001e5211dcf1801508da3,PodSandboxId:b0af13a389999b94937936aded1cd3efeeee6d19a8ed6d90210233f9dd386278,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704746995657175418,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-npzdk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fdfd80ec-9054-4a2c-b7f6-a912162b80a6,},Annotations:map[string]string{io.kubernetes.container.hash: cca2d931,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d8c09e9c329495361ee2ce3b312c4a81f076136c1cca75f9c78bd1edaaef5a9,PodSandboxId:ef8402ba2f001a93b3ea01c3880636c17306535cef3dec3ca32b8126e19b83fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704746992563124823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h4v6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1ccbb8-1747-4b6f-b40c-c54670e49d54,},Annotations:map[string]string{io.kubernetes.container.hash: c7a8decd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6d878fab67076df05e2158b34f0fd7fab053a3e5009bd788aeae63a759967e,PodSandboxId:a484ba75a4dd26eb30ae733d799624c5426e1def2e9514edce02a9b6c402f3aa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704746979810404421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h48qs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 65d532d3-b3ca-493d-b287-1b03dbdad538,},Annotations:map[string]string{io.kubernetes.container.hash: ac4d424e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c90f496e03c7ed6019404e128a1b0d58d84711d2f3a2bcda9e93b788afd26b86,PodSandboxId:14945bada4381eaeea6d8c5304b2471fd306fecdebc5aeb3e04374b1c122de72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704746977652680713,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9xrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0843325-2adf-4c2f-8489-0675546
48b52,},Annotations:map[string]string{io.kubernetes.container.hash: 91a148c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c2279fcbad607c48c6263a9f995c6384c2920e6d95b902633c8ed88ea53aa6b,PodSandboxId:6e5a62d5b491bbd06c18bbc642d989bb408becc015d7f4ac51861239a60f8b23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704746977411411628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de357297-4bd9-4c71-ada5-ceace0d38
cfb,},Annotations:map[string]string{io.kubernetes.container.hash: c338046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23db3f9a7a30628715a05ec3458eece28ed25d20585859ac7c44c303babd8cb,PodSandboxId:f68fb0742fd4589d5e54e1f0883a0deeb5f7b2d4eee0c340ce11b64cc582acb4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704746970631869550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c87b92132627dab75791d3cff759e12,},Annotat
ions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37740cfbc09045a91e5a1b0792a4f112f74758ebc4461bcc9444b54db7e1985a,PodSandboxId:a63283c61de60633c091c9d36f4b18c8efe548bb577f82bdfeab17f28576df32,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704746970561900011,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84677478c7d9bd76d7500f07832cd213,},Annotations:map[string]string{io.kubernetes.container.hash:
c58e30cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea1fdffd83872e08d67d51594e9bcc902b93c6b31db7f1233429afbcd278a5a,PodSandboxId:4577cb2422954ec38623fe3c7f5b4f201fdd3cf49b2cd746a86f09e8694e65bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704746970384827982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9f4acc9b0ffa502cc0493a6d857b92,},Annotations:map[string]string{io.kubernetes.container.hash: 22dbb42a,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c749c2c3dee4297ab3a8d02acd908687fd10670c601a446b793ad2dba13cbd,PodSandboxId:d6d8252568bac6752e68d4379e17195d947b9e4ebd9766822405384ea071ccf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704746970161946771,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f741652d6560a2396658aaab123d801,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8d2037f4-efdc-43f4-8b32-705bd0ca8af9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.413794073Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2df37460-785c-4af2-b28b-1b53825e8ae7 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.413873432Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2df37460-785c-4af2-b28b-1b53825e8ae7 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.415302397Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8a3d6d00-462b-468b-b0fa-65f3a06e9e1b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.415761352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704747216415746430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8a3d6d00-462b-468b-b0fa-65f3a06e9e1b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.416228816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e7f3c245-6d66-4f2c-8726-923e93a0b586 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.416318530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e7f3c245-6d66-4f2c-8726-923e93a0b586 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.416651941Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b246dcdb8113d21d021a6edaf2160452327d6e5ebc4eb59da563d55e74c3da9,PodSandboxId:6e5a62d5b491bbd06c18bbc642d989bb408becc015d7f4ac51861239a60f8b23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704747008404281932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de357297-4bd9-4c71-ada5-ceace0d38cfb,},Annotations:map[string]string{io.kubernetes.container.hash: c338046,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f2c5e2a8ac0cc2c3efd252134ee59d2dc84459be6001e5211dcf1801508da3,PodSandboxId:b0af13a389999b94937936aded1cd3efeeee6d19a8ed6d90210233f9dd386278,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704746995657175418,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-npzdk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fdfd80ec-9054-4a2c-b7f6-a912162b80a6,},Annotations:map[string]string{io.kubernetes.container.hash: cca2d931,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d8c09e9c329495361ee2ce3b312c4a81f076136c1cca75f9c78bd1edaaef5a9,PodSandboxId:ef8402ba2f001a93b3ea01c3880636c17306535cef3dec3ca32b8126e19b83fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704746992563124823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h4v6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1ccbb8-1747-4b6f-b40c-c54670e49d54,},Annotations:map[string]string{io.kubernetes.container.hash: c7a8decd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6d878fab67076df05e2158b34f0fd7fab053a3e5009bd788aeae63a759967e,PodSandboxId:a484ba75a4dd26eb30ae733d799624c5426e1def2e9514edce02a9b6c402f3aa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704746979810404421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h48qs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 65d532d3-b3ca-493d-b287-1b03dbdad538,},Annotations:map[string]string{io.kubernetes.container.hash: ac4d424e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c90f496e03c7ed6019404e128a1b0d58d84711d2f3a2bcda9e93b788afd26b86,PodSandboxId:14945bada4381eaeea6d8c5304b2471fd306fecdebc5aeb3e04374b1c122de72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704746977652680713,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9xrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0843325-2adf-4c2f-8489-0675546
48b52,},Annotations:map[string]string{io.kubernetes.container.hash: 91a148c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c2279fcbad607c48c6263a9f995c6384c2920e6d95b902633c8ed88ea53aa6b,PodSandboxId:6e5a62d5b491bbd06c18bbc642d989bb408becc015d7f4ac51861239a60f8b23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704746977411411628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de357297-4bd9-4c71-ada5-ceace0d38
cfb,},Annotations:map[string]string{io.kubernetes.container.hash: c338046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23db3f9a7a30628715a05ec3458eece28ed25d20585859ac7c44c303babd8cb,PodSandboxId:f68fb0742fd4589d5e54e1f0883a0deeb5f7b2d4eee0c340ce11b64cc582acb4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704746970631869550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c87b92132627dab75791d3cff759e12,},Annotat
ions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37740cfbc09045a91e5a1b0792a4f112f74758ebc4461bcc9444b54db7e1985a,PodSandboxId:a63283c61de60633c091c9d36f4b18c8efe548bb577f82bdfeab17f28576df32,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704746970561900011,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84677478c7d9bd76d7500f07832cd213,},Annotations:map[string]string{io.kubernetes.container.hash:
c58e30cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea1fdffd83872e08d67d51594e9bcc902b93c6b31db7f1233429afbcd278a5a,PodSandboxId:4577cb2422954ec38623fe3c7f5b4f201fdd3cf49b2cd746a86f09e8694e65bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704746970384827982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9f4acc9b0ffa502cc0493a6d857b92,},Annotations:map[string]string{io.kubernetes.container.hash: 22dbb42a,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c749c2c3dee4297ab3a8d02acd908687fd10670c601a446b793ad2dba13cbd,PodSandboxId:d6d8252568bac6752e68d4379e17195d947b9e4ebd9766822405384ea071ccf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704746970161946771,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f741652d6560a2396658aaab123d801,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e7f3c245-6d66-4f2c-8726-923e93a0b586 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.461734571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ebe61c56-7869-4302-a538-a87394336245 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.461819165Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ebe61c56-7869-4302-a538-a87394336245 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.463282030Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=66c4bc78-2681-4123-ae50-6b174895d979 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.463846197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704747216463830757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=66c4bc78-2681-4123-ae50-6b174895d979 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.464964633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=279d8265-d0cc-4cd8-b4e3-3db2c6ed08b0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.465041345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=279d8265-d0cc-4cd8-b4e3-3db2c6ed08b0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.465293151Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b246dcdb8113d21d021a6edaf2160452327d6e5ebc4eb59da563d55e74c3da9,PodSandboxId:6e5a62d5b491bbd06c18bbc642d989bb408becc015d7f4ac51861239a60f8b23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704747008404281932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de357297-4bd9-4c71-ada5-ceace0d38cfb,},Annotations:map[string]string{io.kubernetes.container.hash: c338046,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f2c5e2a8ac0cc2c3efd252134ee59d2dc84459be6001e5211dcf1801508da3,PodSandboxId:b0af13a389999b94937936aded1cd3efeeee6d19a8ed6d90210233f9dd386278,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704746995657175418,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-npzdk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fdfd80ec-9054-4a2c-b7f6-a912162b80a6,},Annotations:map[string]string{io.kubernetes.container.hash: cca2d931,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d8c09e9c329495361ee2ce3b312c4a81f076136c1cca75f9c78bd1edaaef5a9,PodSandboxId:ef8402ba2f001a93b3ea01c3880636c17306535cef3dec3ca32b8126e19b83fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704746992563124823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h4v6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1ccbb8-1747-4b6f-b40c-c54670e49d54,},Annotations:map[string]string{io.kubernetes.container.hash: c7a8decd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6d878fab67076df05e2158b34f0fd7fab053a3e5009bd788aeae63a759967e,PodSandboxId:a484ba75a4dd26eb30ae733d799624c5426e1def2e9514edce02a9b6c402f3aa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704746979810404421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h48qs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 65d532d3-b3ca-493d-b287-1b03dbdad538,},Annotations:map[string]string{io.kubernetes.container.hash: ac4d424e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c90f496e03c7ed6019404e128a1b0d58d84711d2f3a2bcda9e93b788afd26b86,PodSandboxId:14945bada4381eaeea6d8c5304b2471fd306fecdebc5aeb3e04374b1c122de72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704746977652680713,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9xrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0843325-2adf-4c2f-8489-0675546
48b52,},Annotations:map[string]string{io.kubernetes.container.hash: 91a148c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c2279fcbad607c48c6263a9f995c6384c2920e6d95b902633c8ed88ea53aa6b,PodSandboxId:6e5a62d5b491bbd06c18bbc642d989bb408becc015d7f4ac51861239a60f8b23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704746977411411628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de357297-4bd9-4c71-ada5-ceace0d38
cfb,},Annotations:map[string]string{io.kubernetes.container.hash: c338046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23db3f9a7a30628715a05ec3458eece28ed25d20585859ac7c44c303babd8cb,PodSandboxId:f68fb0742fd4589d5e54e1f0883a0deeb5f7b2d4eee0c340ce11b64cc582acb4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704746970631869550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c87b92132627dab75791d3cff759e12,},Annotat
ions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37740cfbc09045a91e5a1b0792a4f112f74758ebc4461bcc9444b54db7e1985a,PodSandboxId:a63283c61de60633c091c9d36f4b18c8efe548bb577f82bdfeab17f28576df32,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704746970561900011,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84677478c7d9bd76d7500f07832cd213,},Annotations:map[string]string{io.kubernetes.container.hash:
c58e30cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea1fdffd83872e08d67d51594e9bcc902b93c6b31db7f1233429afbcd278a5a,PodSandboxId:4577cb2422954ec38623fe3c7f5b4f201fdd3cf49b2cd746a86f09e8694e65bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704746970384827982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9f4acc9b0ffa502cc0493a6d857b92,},Annotations:map[string]string{io.kubernetes.container.hash: 22dbb42a,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c749c2c3dee4297ab3a8d02acd908687fd10670c601a446b793ad2dba13cbd,PodSandboxId:d6d8252568bac6752e68d4379e17195d947b9e4ebd9766822405384ea071ccf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704746970161946771,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f741652d6560a2396658aaab123d801,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=279d8265-d0cc-4cd8-b4e3-3db2c6ed08b0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.506739714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=935e7a06-06da-4151-be27-c98b942e1902 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.506806151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=935e7a06-06da-4151-be27-c98b942e1902 name=/runtime.v1.RuntimeService/Version
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.507806620Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=819e6177-84c7-4a84-96ec-a7115e1a3c96 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.508153995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704747216508143198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=819e6177-84c7-4a84-96ec-a7115e1a3c96 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.509310208Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ea02d09b-d239-4557-b127-184947cdfa0d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.509411294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ea02d09b-d239-4557-b127-184947cdfa0d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 20:53:36 multinode-340815 crio[716]: time="2024-01-08 20:53:36.509714392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b246dcdb8113d21d021a6edaf2160452327d6e5ebc4eb59da563d55e74c3da9,PodSandboxId:6e5a62d5b491bbd06c18bbc642d989bb408becc015d7f4ac51861239a60f8b23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704747008404281932,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de357297-4bd9-4c71-ada5-ceace0d38cfb,},Annotations:map[string]string{io.kubernetes.container.hash: c338046,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32f2c5e2a8ac0cc2c3efd252134ee59d2dc84459be6001e5211dcf1801508da3,PodSandboxId:b0af13a389999b94937936aded1cd3efeeee6d19a8ed6d90210233f9dd386278,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1704746995657175418,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-npzdk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fdfd80ec-9054-4a2c-b7f6-a912162b80a6,},Annotations:map[string]string{io.kubernetes.container.hash: cca2d931,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d8c09e9c329495361ee2ce3b312c4a81f076136c1cca75f9c78bd1edaaef5a9,PodSandboxId:ef8402ba2f001a93b3ea01c3880636c17306535cef3dec3ca32b8126e19b83fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704746992563124823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-h4v6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c1ccbb8-1747-4b6f-b40c-c54670e49d54,},Annotations:map[string]string{io.kubernetes.container.hash: c7a8decd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6d878fab67076df05e2158b34f0fd7fab053a3e5009bd788aeae63a759967e,PodSandboxId:a484ba75a4dd26eb30ae733d799624c5426e1def2e9514edce02a9b6c402f3aa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1704746979810404421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h48qs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 65d532d3-b3ca-493d-b287-1b03dbdad538,},Annotations:map[string]string{io.kubernetes.container.hash: ac4d424e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c90f496e03c7ed6019404e128a1b0d58d84711d2f3a2bcda9e93b788afd26b86,PodSandboxId:14945bada4381eaeea6d8c5304b2471fd306fecdebc5aeb3e04374b1c122de72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704746977652680713,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z9xrv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0843325-2adf-4c2f-8489-0675546
48b52,},Annotations:map[string]string{io.kubernetes.container.hash: 91a148c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c2279fcbad607c48c6263a9f995c6384c2920e6d95b902633c8ed88ea53aa6b,PodSandboxId:6e5a62d5b491bbd06c18bbc642d989bb408becc015d7f4ac51861239a60f8b23,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704746977411411628,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de357297-4bd9-4c71-ada5-ceace0d38
cfb,},Annotations:map[string]string{io.kubernetes.container.hash: c338046,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d23db3f9a7a30628715a05ec3458eece28ed25d20585859ac7c44c303babd8cb,PodSandboxId:f68fb0742fd4589d5e54e1f0883a0deeb5f7b2d4eee0c340ce11b64cc582acb4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704746970631869550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c87b92132627dab75791d3cff759e12,},Annotat
ions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37740cfbc09045a91e5a1b0792a4f112f74758ebc4461bcc9444b54db7e1985a,PodSandboxId:a63283c61de60633c091c9d36f4b18c8efe548bb577f82bdfeab17f28576df32,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704746970561900011,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84677478c7d9bd76d7500f07832cd213,},Annotations:map[string]string{io.kubernetes.container.hash:
c58e30cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea1fdffd83872e08d67d51594e9bcc902b93c6b31db7f1233429afbcd278a5a,PodSandboxId:4577cb2422954ec38623fe3c7f5b4f201fdd3cf49b2cd746a86f09e8694e65bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704746970384827982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a9f4acc9b0ffa502cc0493a6d857b92,},Annotations:map[string]string{io.kubernetes.container.hash: 22dbb42a,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55c749c2c3dee4297ab3a8d02acd908687fd10670c601a446b793ad2dba13cbd,PodSandboxId:d6d8252568bac6752e68d4379e17195d947b9e4ebd9766822405384ea071ccf3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704746970161946771,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-340815,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f741652d6560a2396658aaab123d801,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ea02d09b-d239-4557-b127-184947cdfa0d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6b246dcdb8113       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   6e5a62d5b491b       storage-provisioner
	32f2c5e2a8ac0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   b0af13a389999       busybox-5bc68d56bd-npzdk
	2d8c09e9c3294       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   ef8402ba2f001       coredns-5dd5756b68-h4v6v
	ec6d878fab670       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   a484ba75a4dd2       kindnet-h48qs
	c90f496e03c7e       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   14945bada4381       kube-proxy-z9xrv
	9c2279fcbad60       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   6e5a62d5b491b       storage-provisioner
	d23db3f9a7a30       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      4 minutes ago       Running             kube-scheduler            1                   f68fb0742fd45       kube-scheduler-multinode-340815
	37740cfbc0904       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      4 minutes ago       Running             etcd                      1                   a63283c61de60       etcd-multinode-340815
	fea1fdffd8387       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      4 minutes ago       Running             kube-apiserver            1                   4577cb2422954       kube-apiserver-multinode-340815
	55c749c2c3dee       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      4 minutes ago       Running             kube-controller-manager   1                   d6d8252568bac       kube-controller-manager-multinode-340815
	
	
	==> coredns [2d8c09e9c329495361ee2ce3b312c4a81f076136c1cca75f9c78bd1edaaef5a9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55795 - 54452 "HINFO IN 906662000269858403.8925488523472605801. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017350731s
	
	
	==> describe nodes <==
	Name:               multinode-340815
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-340815
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-340815
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T20_38_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:38:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-340815
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:53:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:50:05 +0000   Mon, 08 Jan 2024 20:37:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:50:05 +0000   Mon, 08 Jan 2024 20:37:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:50:05 +0000   Mon, 08 Jan 2024 20:37:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:50:05 +0000   Mon, 08 Jan 2024 20:49:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    multinode-340815
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 686b856db38c4ec1b793361572ee285f
	  System UUID:                686b856d-b38c-4ec1-b793-361572ee285f
	  Boot ID:                    1faa7e55-93d3-42a0-b64a-4fa9e095a58c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-npzdk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-h4v6v                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-multinode-340815                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-h48qs                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-multinode-340815             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-multinode-340815    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-z9xrv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-multinode-340815             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 15m                  kube-proxy       
	  Normal  Starting                 3m58s                kube-proxy       
	  Normal  Starting                 15m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                  kubelet          Node multinode-340815 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                  kubelet          Node multinode-340815 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                  kubelet          Node multinode-340815 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                  node-controller  Node multinode-340815 event: Registered Node multinode-340815 in Controller
	  Normal  NodeReady                15m                  kubelet          Node multinode-340815 status is now: NodeReady
	  Normal  Starting                 4m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node multinode-340815 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node multinode-340815 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node multinode-340815 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                node-controller  Node multinode-340815 event: Registered Node multinode-340815 in Controller
	
	
	Name:               multinode-340815-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-340815-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-340815
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T20_53_32_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:51:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-340815-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 20:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:51:41 +0000   Mon, 08 Jan 2024 20:51:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:51:41 +0000   Mon, 08 Jan 2024 20:51:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:51:41 +0000   Mon, 08 Jan 2024 20:51:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:51:41 +0000   Mon, 08 Jan 2024 20:51:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    multinode-340815-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6eff253b55e94982ab242ed793ce3707
	  System UUID:                6eff253b-55e9-4982-ab24-2ed793ce3707
	  Boot ID:                    aa127115-c411-412c-9353-fe16e6dae98a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-2l77z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-tqjx8               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-j5w6d            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From        Message
	  ----     ------                   ----                 ----        -------
	  Normal   Starting                 113s                 kube-proxy  
	  Normal   Starting                 13m                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)    kubelet     Node multinode-340815-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)    kubelet     Node multinode-340815-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)    kubelet     Node multinode-340815-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                13m                  kubelet     Node multinode-340815-m02 status is now: NodeReady
	  Normal   NodeNotReady             3m9s                 kubelet     Node multinode-340815-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m41s                kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       118s                 kubelet     Node multinode-340815-m02 status is now: NodeNotSchedulable
	  Normal   Starting                 115s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  115s (x2 over 115s)  kubelet     Node multinode-340815-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    115s (x2 over 115s)  kubelet     Node multinode-340815-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     115s (x2 over 115s)  kubelet     Node multinode-340815-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  115s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                115s                 kubelet     Node multinode-340815-m02 status is now: NodeReady
	
	
	Name:               multinode-340815-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-340815-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=multinode-340815
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T20_53_32_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 20:53:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-340815-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 20:53:31 +0000   Mon, 08 Jan 2024 20:53:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 20:53:31 +0000   Mon, 08 Jan 2024 20:53:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 20:53:31 +0000   Mon, 08 Jan 2024 20:53:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 20:53:31 +0000   Mon, 08 Jan 2024 20:53:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    multinode-340815-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 7df6deceb7c74a2ab493202edfb6de34
	  System UUID:                7df6dece-b7c7-4a2a-b493-202edfb6de34
	  Boot ID:                    080896c9-f3e5-4032-ad47-eb0690c3ad78
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-jqqkf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kindnet-wfgln               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-lxkrv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-340815-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-340815-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-340815-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-340815-m03 status is now: NodeReady
	  Normal   Starting                 12m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)  kubelet     Node multinode-340815-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)  kubelet     Node multinode-340815-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)  kubelet     Node multinode-340815-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-340815-m03 status is now: NodeReady
	  Normal   NodeNotReady             78s                kubelet     Node multinode-340815-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        60s                kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       7s                 kubelet     Node multinode-340815-m03 status is now: NodeNotSchedulable
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-340815-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-340815-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-340815-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-340815-m03 status is now: NodeReady
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	
	
	==> dmesg <==
	[Jan 8 20:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068812] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.432279] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jan 8 20:49] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.136707] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.531139] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.756990] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.114326] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.140741] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.099698] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.227664] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +17.469533] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	[ +18.847335] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [37740cfbc09045a91e5a1b0792a4f112f74758ebc4461bcc9444b54db7e1985a] <==
	{"level":"info","ts":"2024-01-08T20:49:32.362719Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:49:32.362747Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T20:49:32.362947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 switched to configuration voters=(11623670073473264757)"}
	{"level":"info","ts":"2024-01-08T20:49:32.363064Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","added-peer-id":"a14f9258d3b66c75","added-peer-peer-urls":["https://192.168.39.196:2380"]}
	{"level":"info","ts":"2024-01-08T20:49:32.363209Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:49:32.363285Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T20:49:32.363309Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-01-08T20:49:32.363345Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-01-08T20:49:32.363282Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T20:49:32.364336Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a14f9258d3b66c75","initial-advertise-peer-urls":["https://192.168.39.196:2380"],"listen-peer-urls":["https://192.168.39.196:2380"],"advertise-client-urls":["https://192.168.39.196:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.196:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T20:49:32.364362Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T20:49:33.64663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-08T20:49:33.64667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-08T20:49:33.646703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 received MsgPreVoteResp from a14f9258d3b66c75 at term 2"}
	{"level":"info","ts":"2024-01-08T20:49:33.646716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became candidate at term 3"}
	{"level":"info","ts":"2024-01-08T20:49:33.646725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 received MsgVoteResp from a14f9258d3b66c75 at term 3"}
	{"level":"info","ts":"2024-01-08T20:49:33.646733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 became leader at term 3"}
	{"level":"info","ts":"2024-01-08T20:49:33.64674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a14f9258d3b66c75 elected leader a14f9258d3b66c75 at term 3"}
	{"level":"info","ts":"2024-01-08T20:49:33.649976Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a14f9258d3b66c75","local-member-attributes":"{Name:multinode-340815 ClientURLs:[https://192.168.39.196:2379]}","request-path":"/0/members/a14f9258d3b66c75/attributes","cluster-id":"8309c60c27e527a4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T20:49:33.650156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:49:33.650682Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T20:49:33.650919Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T20:49:33.651042Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T20:49:33.651387Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T20:49:33.652329Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.196:2379"}
	
	
	==> kernel <==
	 20:53:36 up 4 min,  0 users,  load average: 0.07, 0.19, 0.09
	Linux multinode-340815 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [ec6d878fab67076df05e2158b34f0fd7fab053a3e5009bd788aeae63a759967e] <==
	I0108 20:52:51.443548       1 main.go:250] Node multinode-340815-m02 has CIDR [10.244.1.0/24] 
	I0108 20:52:51.443700       1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
	I0108 20:52:51.443737       1 main.go:250] Node multinode-340815-m03 has CIDR [10.244.3.0/24] 
	I0108 20:53:01.453766       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:53:01.453893       1 main.go:227] handling current node
	I0108 20:53:01.453926       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0108 20:53:01.453944       1 main.go:250] Node multinode-340815-m02 has CIDR [10.244.1.0/24] 
	I0108 20:53:01.454158       1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
	I0108 20:53:01.454200       1 main.go:250] Node multinode-340815-m03 has CIDR [10.244.3.0/24] 
	I0108 20:53:11.464889       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:53:11.464949       1 main.go:227] handling current node
	I0108 20:53:11.464972       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0108 20:53:11.464978       1 main.go:250] Node multinode-340815-m02 has CIDR [10.244.1.0/24] 
	I0108 20:53:11.465112       1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
	I0108 20:53:11.465145       1 main.go:250] Node multinode-340815-m03 has CIDR [10.244.3.0/24] 
	I0108 20:53:21.470193       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:53:21.470248       1 main.go:227] handling current node
	I0108 20:53:21.470261       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0108 20:53:21.470267       1 main.go:250] Node multinode-340815-m02 has CIDR [10.244.1.0/24] 
	I0108 20:53:21.470367       1 main.go:223] Handling node with IPs: map[192.168.39.249:{}]
	I0108 20:53:21.470372       1 main.go:250] Node multinode-340815-m03 has CIDR [10.244.3.0/24] 
	I0108 20:53:31.479974       1 main.go:223] Handling node with IPs: map[192.168.39.196:{}]
	I0108 20:53:31.480044       1 main.go:227] handling current node
	I0108 20:53:31.480061       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0108 20:53:31.480068       1 main.go:250] Node multinode-340815-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [fea1fdffd83872e08d67d51594e9bcc902b93c6b31db7f1233429afbcd278a5a] <==
	I0108 20:49:35.096650       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0108 20:49:35.151779       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0108 20:49:35.151930       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0108 20:49:35.296036       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0108 20:49:35.305564       1 shared_informer.go:318] Caches are synced for configmaps
	I0108 20:49:35.305872       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0108 20:49:35.305912       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0108 20:49:35.306414       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 20:49:35.307599       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 20:49:35.307757       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 20:49:35.316162       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0108 20:49:35.316428       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0108 20:49:35.316537       1 aggregator.go:166] initial CRD sync complete...
	I0108 20:49:35.316544       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 20:49:35.316549       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 20:49:35.316554       1 cache.go:39] Caches are synced for autoregister controller
	E0108 20:49:35.337863       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0108 20:49:36.100877       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 20:49:37.983089       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 20:49:38.146076       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 20:49:38.158950       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 20:49:38.319803       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 20:49:38.336933       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 20:49:47.612218       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 20:49:47.671215       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [55c749c2c3dee4297ab3a8d02acd908687fd10670c601a446b793ad2dba13cbd] <==
	I0108 20:51:41.757301       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-340815-m02\" does not exist"
	I0108 20:51:41.758758       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-95tbd" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-95tbd"
	I0108 20:51:41.790875       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-340815-m02" podCIDRs=["10.244.1.0/24"]
	I0108 20:51:41.895238       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-340815-m02"
	I0108 20:51:41.998243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.441635ms"
	I0108 20:51:41.998520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="114.528µs"
	I0108 20:51:42.640187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="79.983µs"
	I0108 20:51:56.039809       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="123.466µs"
	I0108 20:51:56.537571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="208.668µs"
	I0108 20:51:56.540923       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="125.333µs"
	I0108 20:52:18.591642       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-340815-m02"
	I0108 20:53:28.081583       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-2l77z"
	I0108 20:53:28.098970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="28.110255ms"
	I0108 20:53:28.111079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="12.036233ms"
	I0108 20:53:28.111214       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="69.318µs"
	I0108 20:53:28.136065       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.335µs"
	I0108 20:53:29.810339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.548595ms"
	I0108 20:53:29.811227       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="67.114µs"
	I0108 20:53:31.102258       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-340815-m02"
	I0108 20:53:31.779257       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-340815-m03\" does not exist"
	I0108 20:53:31.779538       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-340815-m02"
	I0108 20:53:31.779820       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-jqqkf" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-jqqkf"
	I0108 20:53:31.805695       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-340815-m03" podCIDRs=["10.244.2.0/24"]
	I0108 20:53:31.925391       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-340815-m02"
	I0108 20:53:32.680340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="112.943µs"
	
	
	==> kube-proxy [c90f496e03c7ed6019404e128a1b0d58d84711d2f3a2bcda9e93b788afd26b86] <==
	I0108 20:49:37.914953       1 server_others.go:69] "Using iptables proxy"
	I0108 20:49:37.934036       1 node.go:141] Successfully retrieved node IP: 192.168.39.196
	I0108 20:49:38.034904       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 20:49:38.034951       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 20:49:38.038176       1 server_others.go:152] "Using iptables Proxier"
	I0108 20:49:38.038234       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 20:49:38.038406       1 server.go:846] "Version info" version="v1.28.4"
	I0108 20:49:38.038416       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:49:38.039321       1 config.go:188] "Starting service config controller"
	I0108 20:49:38.039364       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 20:49:38.039386       1 config.go:97] "Starting endpoint slice config controller"
	I0108 20:49:38.039390       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 20:49:38.040032       1 config.go:315] "Starting node config controller"
	I0108 20:49:38.040067       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 20:49:38.139673       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 20:49:38.139783       1 shared_informer.go:318] Caches are synced for service config
	I0108 20:49:38.140099       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d23db3f9a7a30628715a05ec3458eece28ed25d20585859ac7c44c303babd8cb] <==
	I0108 20:49:32.650153       1 serving.go:348] Generated self-signed cert in-memory
	W0108 20:49:35.169965       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 20:49:35.170065       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 20:49:35.170094       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 20:49:35.170118       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 20:49:35.260378       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0108 20:49:35.260519       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 20:49:35.263896       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0108 20:49:35.263981       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 20:49:35.263994       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 20:49:35.264014       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 20:49:35.365263       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 20:49:01 UTC, ends at Mon 2024-01-08 20:53:37 UTC. --
	Jan 08 20:49:42 multinode-340815 kubelet[921]: E0108 20:49:42.174789     921 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-h4v6v" podUID="5c1ccbb8-1747-4b6f-b40c-c54670e49d54"
	Jan 08 20:49:43 multinode-340815 kubelet[921]: E0108 20:49:43.770990     921 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 08 20:49:43 multinode-340815 kubelet[921]: E0108 20:49:43.771098     921 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c1ccbb8-1747-4b6f-b40c-c54670e49d54-config-volume podName:5c1ccbb8-1747-4b6f-b40c-c54670e49d54 nodeName:}" failed. No retries permitted until 2024-01-08 20:49:51.771083803 +0000 UTC m=+22.848279396 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5c1ccbb8-1747-4b6f-b40c-c54670e49d54-config-volume") pod "coredns-5dd5756b68-h4v6v" (UID: "5c1ccbb8-1747-4b6f-b40c-c54670e49d54") : object "kube-system"/"coredns" not registered
	Jan 08 20:49:43 multinode-340815 kubelet[921]: E0108 20:49:43.871324     921 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jan 08 20:49:43 multinode-340815 kubelet[921]: E0108 20:49:43.871357     921 projected.go:198] Error preparing data for projected volume kube-api-access-f5nf5 for pod default/busybox-5bc68d56bd-npzdk: object "default"/"kube-root-ca.crt" not registered
	Jan 08 20:49:43 multinode-340815 kubelet[921]: E0108 20:49:43.871492     921 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdfd80ec-9054-4a2c-b7f6-a912162b80a6-kube-api-access-f5nf5 podName:fdfd80ec-9054-4a2c-b7f6-a912162b80a6 nodeName:}" failed. No retries permitted until 2024-01-08 20:49:51.871418952 +0000 UTC m=+22.948614545 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-f5nf5" (UniqueName: "kubernetes.io/projected/fdfd80ec-9054-4a2c-b7f6-a912162b80a6-kube-api-access-f5nf5") pod "busybox-5bc68d56bd-npzdk" (UID: "fdfd80ec-9054-4a2c-b7f6-a912162b80a6") : object "default"/"kube-root-ca.crt" not registered
	Jan 08 20:49:44 multinode-340815 kubelet[921]: E0108 20:49:44.174406     921 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-h4v6v" podUID="5c1ccbb8-1747-4b6f-b40c-c54670e49d54"
	Jan 08 20:49:44 multinode-340815 kubelet[921]: E0108 20:49:44.175126     921 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-npzdk" podUID="fdfd80ec-9054-4a2c-b7f6-a912162b80a6"
	Jan 08 20:50:08 multinode-340815 kubelet[921]: I0108 20:50:08.375875     921 scope.go:117] "RemoveContainer" containerID="9c2279fcbad607c48c6263a9f995c6384c2920e6d95b902633c8ed88ea53aa6b"
	Jan 08 20:50:29 multinode-340815 kubelet[921]: E0108 20:50:29.193933     921 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 20:50:29 multinode-340815 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 20:50:29 multinode-340815 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 20:50:29 multinode-340815 kubelet[921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 20:51:29 multinode-340815 kubelet[921]: E0108 20:51:29.197828     921 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 20:51:29 multinode-340815 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 20:51:29 multinode-340815 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 20:51:29 multinode-340815 kubelet[921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 20:52:29 multinode-340815 kubelet[921]: E0108 20:52:29.192018     921 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 20:52:29 multinode-340815 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 20:52:29 multinode-340815 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 20:52:29 multinode-340815 kubelet[921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 20:53:29 multinode-340815 kubelet[921]: E0108 20:53:29.201352     921 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 20:53:29 multinode-340815 kubelet[921]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 20:53:29 multinode-340815 kubelet[921]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 20:53:29 multinode-340815 kubelet[921]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-340815 -n multinode-340815
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-340815 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (708.26s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 stop
E0108 20:54:26.820228   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:55:36.429887   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-340815 stop: exit status 82 (2m1.436130495s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-340815"  ...
	* Stopping node "multinode-340815"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-340815 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-340815 status: exit status 3 (18.760881329s)

                                                
                                                
-- stdout --
	multinode-340815
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-340815-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 20:55:59.804418   37403 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.196:22: connect: no route to host
	E0108 20:55:59.804462   37403 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.196:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-340815 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-340815 -n multinode-340815
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-340815 -n multinode-340815: exit status 3 (3.190879425s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 20:56:03.164404   37486 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.196:22: connect: no route to host
	E0108 20:56:03.164436   37486 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.196:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-340815" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.39s)
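The stop failure above is the kvm2 driver timing out while the VM stays in state "Running" (GUEST_STOP_TIMEOUT, exit status 82). A minimal sketch for inspecting the libvirt side directly on the host, assuming the libvirt domain is named after the profile as the kvm2 driver normally does:

	virsh list --all                  # is the multinode-340815 domain still listed as running?
	virsh shutdown multinode-340815   # request a graceful ACPI shutdown
	virsh destroy multinode-340815    # hard-stop the domain if it never powers off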

                                                
                                    
x
+
TestPreload (336.09s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-609011 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0108 21:04:26.820401   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 21:05:36.429216   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 21:06:04.516452   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-609011 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m11.751265897s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-609011 image pull gcr.io/k8s-minikube/busybox
E0108 21:07:29.869404   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-609011 image pull gcr.io/k8s-minikube/busybox: (2.924188074s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-609011
E0108 21:09:26.820513   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-609011: exit status 82 (2m1.666928276s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-609011"  ...
	* Stopping node "test-preload-609011"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-609011 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2024-01-08 21:09:34.184359427 +0000 UTC m=+3598.109250778
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-609011 -n test-preload-609011
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-609011 -n test-preload-609011: exit status 3 (18.648318263s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:09:52.828477   40576 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.49:22: connect: no route to host
	E0108 21:09:52.828496   40576 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.49:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-609011" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-609011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-609011
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-609011: (1.1015894s)
--- FAIL: TestPreload (336.09s)
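TestPreload fails on the same stop timeout, and the follow-up status probes then report "no route to host" because they try to SSH into a guest that is no longer reachable. The same reachability check can be run from the host; the IP is taken from the status error above and `nc` is assumed to be installed:

	nc -vz -w 5 192.168.39.49 22 || echo "guest SSH port unreachable, consistent with the status errors"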

                                                
                                    
x
+
TestRunningBinaryUpgrade (194.13s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.1401772272.exe start -p running-upgrade-631345 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.1401772272.exe start -p running-upgrade-631345 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m29.95488758s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-631345 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0108 21:14:26.819967   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-631345 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (40.389598477s)

                                                
                                                
-- stdout --
	* [running-upgrade-631345] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-631345 in cluster running-upgrade-631345
	* Updating the running kvm2 "running-upgrade-631345" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:14:24.073627   45641 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:14:24.073898   45641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:14:24.073909   45641 out.go:309] Setting ErrFile to fd 2...
	I0108 21:14:24.073914   45641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:14:24.074089   45641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:14:24.074636   45641 out.go:303] Setting JSON to false
	I0108 21:14:24.075586   45641 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6988,"bootTime":1704741476,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:14:24.075662   45641 start.go:138] virtualization: kvm guest
	I0108 21:14:24.078213   45641 out.go:177] * [running-upgrade-631345] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:14:24.079772   45641 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:14:24.079797   45641 notify.go:220] Checking for updates...
	I0108 21:14:24.081281   45641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:14:24.082779   45641 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:14:24.084553   45641 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:14:24.086323   45641 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:14:24.087765   45641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:14:24.089509   45641 config.go:182] Loaded profile config "running-upgrade-631345": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 21:14:24.089567   45641 start_flags.go:694] config upgrade: Driver=kvm2
	I0108 21:14:24.089584   45641 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 21:14:24.089678   45641 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/running-upgrade-631345/config.json ...
	I0108 21:14:24.090436   45641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:14:24.090511   45641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:14:24.105938   45641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37419
	I0108 21:14:24.106472   45641 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:14:24.107157   45641 main.go:141] libmachine: Using API Version  1
	I0108 21:14:24.107176   45641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:14:24.107505   45641 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:14:24.107728   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .DriverName
	I0108 21:14:24.110239   45641 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 21:14:24.112344   45641 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:14:24.112694   45641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:14:24.112746   45641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:14:24.129999   45641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43055
	I0108 21:14:24.130422   45641 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:14:24.130940   45641 main.go:141] libmachine: Using API Version  1
	I0108 21:14:24.130967   45641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:14:24.131309   45641 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:14:24.131522   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .DriverName
	I0108 21:14:24.167554   45641 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 21:14:24.168861   45641 start.go:298] selected driver: kvm2
	I0108 21:14:24.168879   45641 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-631345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.7 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 21:14:24.168994   45641 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:14:24.169656   45641 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:14:24.169732   45641 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:14:24.184786   45641 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:14:24.185162   45641 cni.go:84] Creating CNI manager for ""
	I0108 21:14:24.185181   45641 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0108 21:14:24.185192   45641 start_flags.go:323] config:
	{Name:running-upgrade-631345 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.7 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 21:14:24.185374   45641 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:14:24.187505   45641 out.go:177] * Starting control plane node running-upgrade-631345 in cluster running-upgrade-631345
	I0108 21:14:24.189158   45641 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0108 21:14:24.654944   45641 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0108 21:14:24.655152   45641 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/running-upgrade-631345/config.json ...
	I0108 21:14:24.655202   45641 cache.go:107] acquiring lock: {Name:mk404ee59d151f42edf5b0bb65897bb384427ec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:14:24.655288   45641 cache.go:107] acquiring lock: {Name:mk1e6c735aae94af16a2e2bf6ff299b004c771f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:14:24.655320   45641 cache.go:107] acquiring lock: {Name:mk74c6e324c6e41d154535f7e724b46548b36d70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:14:24.655315   45641 cache.go:107] acquiring lock: {Name:mk1bac41a2910c6e144ea55b3470102402a1bfda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:14:24.655394   45641 cache.go:107] acquiring lock: {Name:mk6ffccac4c858f5ee7d8c1ef59b5ce6772c4de9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:14:24.655418   45641 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0108 21:14:24.655459   45641 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0108 21:14:24.655482   45641 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0108 21:14:24.655509   45641 start.go:365] acquiring machines lock for running-upgrade-631345: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:14:24.655262   45641 cache.go:107] acquiring lock: {Name:mk65389ddcd499e05451b4ba07b5887fde683f25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:14:24.655585   45641 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0108 21:14:24.655605   45641 cache.go:107] acquiring lock: {Name:mkeb3e7e4793a65991e84bd10e24abf147a4d51a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:14:24.655637   45641 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0108 21:14:24.655701   45641 cache.go:115] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 21:14:24.655727   45641 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 532.775µs
	I0108 21:14:24.655749   45641 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 21:14:24.655710   45641 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0108 21:14:24.655798   45641 cache.go:107] acquiring lock: {Name:mka841fe0ca90530e95adda70e575bf96a6fa659 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:14:24.655931   45641 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0108 21:14:24.656760   45641 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0108 21:14:24.656891   45641 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0108 21:14:24.656898   45641 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0108 21:14:24.656893   45641 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0108 21:14:24.656896   45641 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0108 21:14:24.656954   45641 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0108 21:14:24.656965   45641 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0108 21:14:24.799668   45641 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0108 21:14:24.809836   45641 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0108 21:14:24.833509   45641 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I0108 21:14:24.833826   45641 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I0108 21:14:24.851708   45641 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I0108 21:14:24.864198   45641 cache.go:157] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0108 21:14:24.864229   45641 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 208.908781ms
	I0108 21:14:24.864245   45641 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0108 21:14:24.868550   45641 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I0108 21:14:24.888083   45641 cache.go:162] opening:  /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I0108 21:14:25.392366   45641 cache.go:157] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0108 21:14:25.392393   45641 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 736.99983ms
	I0108 21:14:25.392409   45641 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0108 21:14:25.751622   45641 cache.go:157] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0108 21:14:25.751653   45641 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.09636961s
	I0108 21:14:25.751669   45641 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0108 21:14:25.906606   45641 cache.go:157] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0108 21:14:25.906682   45641 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.250923542s
	I0108 21:14:25.906712   45641 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0108 21:14:25.976406   45641 cache.go:157] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0108 21:14:25.976437   45641 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.320846114s
	I0108 21:14:25.976449   45641 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0108 21:14:26.393959   45641 cache.go:157] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0108 21:14:26.393994   45641 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.738743898s
	I0108 21:14:26.394006   45641 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0108 21:14:26.498067   45641 cache.go:157] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0108 21:14:26.498104   45641 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.842815785s
	I0108 21:14:26.498121   45641 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0108 21:14:26.498143   45641 cache.go:87] Successfully saved all images to host disk.
	I0108 21:15:01.101939   45641 start.go:369] acquired machines lock for "running-upgrade-631345" in 36.446383161s
	I0108 21:15:01.102008   45641 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:15:01.102049   45641 fix.go:54] fixHost starting: minikube
	I0108 21:15:01.102481   45641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:15:01.102532   45641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:15:01.121476   45641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34251
	I0108 21:15:01.122036   45641 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:15:01.122750   45641 main.go:141] libmachine: Using API Version  1
	I0108 21:15:01.122779   45641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:15:01.123195   45641 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:15:01.123442   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .DriverName
	I0108 21:15:01.123602   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetState
	I0108 21:15:01.125589   45641 fix.go:102] recreateIfNeeded on running-upgrade-631345: state=Running err=<nil>
	W0108 21:15:01.125621   45641 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:15:01.127701   45641 out.go:177] * Updating the running kvm2 "running-upgrade-631345" VM ...
	I0108 21:15:01.129072   45641 machine.go:88] provisioning docker machine ...
	I0108 21:15:01.129103   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .DriverName
	I0108 21:15:01.129412   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetMachineName
	I0108 21:15:01.129596   45641 buildroot.go:166] provisioning hostname "running-upgrade-631345"
	I0108 21:15:01.129621   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetMachineName
	I0108 21:15:01.129783   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHHostname
	I0108 21:15:01.132629   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:01.133068   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:fa:5f", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:12:33 +0000 UTC Type:0 Mac:52:54:00:ad:fa:5f Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:running-upgrade-631345 Clientid:01:52:54:00:ad:fa:5f}
	I0108 21:15:01.133107   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined IP address 192.168.50.7 and MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:01.133321   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHPort
	I0108 21:15:01.133525   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHKeyPath
	I0108 21:15:01.133680   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHKeyPath
	I0108 21:15:01.133795   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHUsername
	I0108 21:15:01.133977   45641 main.go:141] libmachine: Using SSH client type: native
	I0108 21:15:01.134448   45641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0108 21:15:01.134468   45641 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-631345 && echo "running-upgrade-631345" | sudo tee /etc/hostname
	I0108 21:15:01.260310   45641 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-631345
	
	I0108 21:15:01.260344   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHHostname
	I0108 21:15:01.263551   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:01.263965   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:fa:5f", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:12:33 +0000 UTC Type:0 Mac:52:54:00:ad:fa:5f Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:running-upgrade-631345 Clientid:01:52:54:00:ad:fa:5f}
	I0108 21:15:01.264015   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined IP address 192.168.50.7 and MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:01.264174   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHPort
	I0108 21:15:01.264379   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHKeyPath
	I0108 21:15:01.264550   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHKeyPath
	I0108 21:15:01.264800   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHUsername
	I0108 21:15:01.264991   45641 main.go:141] libmachine: Using SSH client type: native
	I0108 21:15:01.265430   45641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0108 21:15:01.265456   45641 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-631345' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-631345/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-631345' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:15:01.391271   45641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:15:01.391363   45641 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17907-10702/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-10702/.minikube}
	I0108 21:15:01.391398   45641 buildroot.go:174] setting up certificates
	I0108 21:15:01.391411   45641 provision.go:83] configureAuth start
	I0108 21:15:01.391427   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetMachineName
	I0108 21:15:01.391849   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetIP
	I0108 21:15:01.396542   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:01.397068   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:fa:5f", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:12:33 +0000 UTC Type:0 Mac:52:54:00:ad:fa:5f Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:running-upgrade-631345 Clientid:01:52:54:00:ad:fa:5f}
	I0108 21:15:01.397103   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined IP address 192.168.50.7 and MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:01.397425   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHHostname
	I0108 21:15:01.401234   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:01.401682   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:fa:5f", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:12:33 +0000 UTC Type:0 Mac:52:54:00:ad:fa:5f Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:running-upgrade-631345 Clientid:01:52:54:00:ad:fa:5f}
	I0108 21:15:01.401738   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined IP address 192.168.50.7 and MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:01.401916   45641 provision.go:138] copyHostCerts
	I0108 21:15:01.402033   45641 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem, removing ...
	I0108 21:15:01.402050   45641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 21:15:01.402127   45641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem (1123 bytes)
	I0108 21:15:01.402259   45641 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem, removing ...
	I0108 21:15:01.402273   45641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 21:15:01.402307   45641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem (1675 bytes)
	I0108 21:15:01.402386   45641 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem, removing ...
	I0108 21:15:01.402397   45641 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 21:15:01.402436   45641 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem (1082 bytes)
	I0108 21:15:01.402503   45641 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-631345 san=[192.168.50.7 192.168.50.7 localhost 127.0.0.1 minikube running-upgrade-631345]
	I0108 21:15:01.462990   45641 provision.go:172] copyRemoteCerts
	I0108 21:15:01.463052   45641 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:15:01.463074   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHHostname
	I0108 21:15:01.466654   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:01.467316   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:fa:5f", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:12:33 +0000 UTC Type:0 Mac:52:54:00:ad:fa:5f Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:running-upgrade-631345 Clientid:01:52:54:00:ad:fa:5f}
	I0108 21:15:01.467358   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined IP address 192.168.50.7 and MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:01.467581   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHPort
	I0108 21:15:01.467789   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHKeyPath
	I0108 21:15:01.467992   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHUsername
	I0108 21:15:01.468176   45641 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/running-upgrade-631345/id_rsa Username:docker}
	I0108 21:15:01.563998   45641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 21:15:01.584763   45641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 21:15:01.608428   45641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:15:01.629696   45641 provision.go:86] duration metric: configureAuth took 238.270926ms
	I0108 21:15:01.629732   45641 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:15:01.629958   45641 config.go:182] Loaded profile config "running-upgrade-631345": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 21:15:01.630052   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHHostname
	I0108 21:15:01.633383   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:01.633912   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:fa:5f", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:12:33 +0000 UTC Type:0 Mac:52:54:00:ad:fa:5f Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:running-upgrade-631345 Clientid:01:52:54:00:ad:fa:5f}
	I0108 21:15:01.634011   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined IP address 192.168.50.7 and MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:01.634259   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHPort
	I0108 21:15:01.634468   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHKeyPath
	I0108 21:15:01.634678   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHKeyPath
	I0108 21:15:01.635034   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHUsername
	I0108 21:15:01.635309   45641 main.go:141] libmachine: Using SSH client type: native
	I0108 21:15:01.635657   45641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0108 21:15:01.635688   45641 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:15:02.252022   45641 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:15:02.252057   45641 machine.go:91] provisioned docker machine in 1.122966247s
	I0108 21:15:02.252071   45641 start.go:300] post-start starting for "running-upgrade-631345" (driver="kvm2")
	I0108 21:15:02.252085   45641 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:15:02.252132   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .DriverName
	I0108 21:15:02.252436   45641 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:15:02.252467   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHHostname
	I0108 21:15:02.255198   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:02.255541   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:fa:5f", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:12:33 +0000 UTC Type:0 Mac:52:54:00:ad:fa:5f Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:running-upgrade-631345 Clientid:01:52:54:00:ad:fa:5f}
	I0108 21:15:02.255561   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined IP address 192.168.50.7 and MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:02.255734   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHPort
	I0108 21:15:02.255923   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHKeyPath
	I0108 21:15:02.256070   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHUsername
	I0108 21:15:02.256235   45641 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/running-upgrade-631345/id_rsa Username:docker}
	I0108 21:15:02.341005   45641 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:15:02.346084   45641 info.go:137] Remote host: Buildroot 2019.02.7
	I0108 21:15:02.346116   45641 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/addons for local assets ...
	I0108 21:15:02.346183   45641 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/files for local assets ...
	I0108 21:15:02.346266   45641 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> 178962.pem in /etc/ssl/certs
	I0108 21:15:02.346400   45641 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:15:02.353652   45641 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /etc/ssl/certs/178962.pem (1708 bytes)
	I0108 21:15:02.372295   45641 start.go:303] post-start completed in 120.208377ms
	I0108 21:15:02.372321   45641 fix.go:56] fixHost completed within 1.270304862s
	I0108 21:15:02.372365   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHHostname
	I0108 21:15:02.376253   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:02.376695   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:fa:5f", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:12:33 +0000 UTC Type:0 Mac:52:54:00:ad:fa:5f Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:running-upgrade-631345 Clientid:01:52:54:00:ad:fa:5f}
	I0108 21:15:02.376728   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined IP address 192.168.50.7 and MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:02.376969   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHPort
	I0108 21:15:02.377218   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHKeyPath
	I0108 21:15:02.377386   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHKeyPath
	I0108 21:15:02.377565   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHUsername
	I0108 21:15:02.377740   45641 main.go:141] libmachine: Using SSH client type: native
	I0108 21:15:02.378204   45641 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0108 21:15:02.378222   45641 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 21:15:02.497883   45641 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704748502.495321016
	
	I0108 21:15:02.497966   45641 fix.go:206] guest clock: 1704748502.495321016
	I0108 21:15:02.497981   45641 fix.go:219] Guest: 2024-01-08 21:15:02.495321016 +0000 UTC Remote: 2024-01-08 21:15:02.372342399 +0000 UTC m=+38.349853634 (delta=122.978617ms)
	I0108 21:15:02.498024   45641 fix.go:190] guest clock delta is within tolerance: 122.978617ms
	I0108 21:15:02.498031   45641 start.go:83] releasing machines lock for "running-upgrade-631345", held for 1.396047417s
	I0108 21:15:02.498060   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .DriverName
	I0108 21:15:02.498344   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetIP
	I0108 21:15:02.501532   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:02.501913   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:fa:5f", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:12:33 +0000 UTC Type:0 Mac:52:54:00:ad:fa:5f Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:running-upgrade-631345 Clientid:01:52:54:00:ad:fa:5f}
	I0108 21:15:02.501938   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined IP address 192.168.50.7 and MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:02.502121   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .DriverName
	I0108 21:15:02.502714   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .DriverName
	I0108 21:15:02.502873   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .DriverName
	I0108 21:15:02.502997   45641 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:15:02.503042   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHHostname
	I0108 21:15:02.503101   45641 ssh_runner.go:195] Run: cat /version.json
	I0108 21:15:02.503144   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHHostname
	I0108 21:15:02.506028   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:02.506262   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:02.506404   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:fa:5f", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:12:33 +0000 UTC Type:0 Mac:52:54:00:ad:fa:5f Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:running-upgrade-631345 Clientid:01:52:54:00:ad:fa:5f}
	I0108 21:15:02.506429   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined IP address 192.168.50.7 and MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:02.506608   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHPort
	I0108 21:15:02.506623   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:fa:5f", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2024-01-08 22:12:33 +0000 UTC Type:0 Mac:52:54:00:ad:fa:5f Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:running-upgrade-631345 Clientid:01:52:54:00:ad:fa:5f}
	I0108 21:15:02.506647   45641 main.go:141] libmachine: (running-upgrade-631345) DBG | domain running-upgrade-631345 has defined IP address 192.168.50.7 and MAC address 52:54:00:ad:fa:5f in network minikube-net
	I0108 21:15:02.506811   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHKeyPath
	I0108 21:15:02.506966   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHPort
	I0108 21:15:02.507072   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHUsername
	I0108 21:15:02.507138   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHKeyPath
	I0108 21:15:02.507203   45641 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/running-upgrade-631345/id_rsa Username:docker}
	I0108 21:15:02.507274   45641 main.go:141] libmachine: (running-upgrade-631345) Calling .GetSSHUsername
	I0108 21:15:02.507453   45641 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/running-upgrade-631345/id_rsa Username:docker}
	W0108 21:15:02.624798   45641 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 21:15:02.624874   45641 ssh_runner.go:195] Run: systemctl --version
	I0108 21:15:02.632302   45641 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:15:02.749194   45641 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 21:15:02.755433   45641 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:15:02.755492   45641 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:15:02.761873   45641 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 21:15:02.761897   45641 start.go:475] detecting cgroup driver to use...
	I0108 21:15:02.761984   45641 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:15:02.776677   45641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:15:02.790007   45641 docker.go:217] disabling cri-docker service (if available) ...
	I0108 21:15:02.790074   45641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:15:02.801397   45641 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:15:02.817210   45641 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 21:15:02.827754   45641 docker.go:227] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 21:15:02.827849   45641 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:15:02.979876   45641 docker.go:233] disabling docker service ...
	I0108 21:15:02.980067   45641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:15:04.008767   45641 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.028633753s)
	I0108 21:15:04.008835   45641 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:15:04.021415   45641 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:15:04.156754   45641 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:15:04.352696   45641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:15:04.366731   45641 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:15:04.383955   45641 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0108 21:15:04.384039   45641 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:15:04.396493   45641 out.go:177] 
	W0108 21:15:04.398266   45641 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 21:15:04.398289   45641 out.go:239] * 
	* 
	W0108 21:15:04.399550   45641 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:15:04.401167   45641 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-631345 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2024-01-08 21:15:04.424439002 +0000 UTC m=+3928.349330389
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-631345 -n running-upgrade-631345
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-631345 -n running-upgrade-631345: exit status 4 (312.184297ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:15:04.690463   46114 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-631345" does not appear in /home/jenkins/minikube-integration/17907-10702/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-631345" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-631345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-631345
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-631345: (1.587964456s)
--- FAIL: TestRunningBinaryUpgrade (194.13s)
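
The upgrade run above fails because the new binary tries to patch pause_image in /etc/crio/crio.conf.d/02-crio.conf, a drop-in file that the old v1.6.2 guest image evidently does not ship. A minimal sketch of a more defensive patch follows; the sed expression is taken from the failing command above, while the /etc/crio/crio.conf fallback path is an assumption about where older guest images keep their CRI-O configuration, not something taken from this report:

    # Sketch only: patch whichever CRI-O config file is actually present on the guest.
    # /etc/crio/crio.conf.d/02-crio.conf is the path from the failing command above;
    # /etc/crio/crio.conf is an assumed fallback for older guest images.
    for f in /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf; do
      if sudo test -f "$f"; then
        sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$f"
        break
      fi
    done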

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (141.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-879273 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-879273 --alsologtostderr -v=3: exit status 82 (2m2.609193153s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-879273"  ...
	* Stopping node "old-k8s-version-879273"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:14:43.073153   45911 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:14:43.073327   45911 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:14:43.073351   45911 out.go:309] Setting ErrFile to fd 2...
	I0108 21:14:43.073367   45911 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:14:43.073734   45911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:14:43.074098   45911 out.go:303] Setting JSON to false
	I0108 21:14:43.074246   45911 mustload.go:65] Loading cluster: old-k8s-version-879273
	I0108 21:14:43.074776   45911 config.go:182] Loaded profile config "old-k8s-version-879273": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 21:14:43.074890   45911 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/config.json ...
	I0108 21:14:43.075137   45911 mustload.go:65] Loading cluster: old-k8s-version-879273
	I0108 21:14:43.075323   45911 config.go:182] Loaded profile config "old-k8s-version-879273": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 21:14:43.075372   45911 stop.go:39] StopHost: old-k8s-version-879273
	I0108 21:14:43.075933   45911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:14:43.076015   45911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:14:43.092172   45911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40701
	I0108 21:14:43.092803   45911 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:14:43.093458   45911 main.go:141] libmachine: Using API Version  1
	I0108 21:14:43.093486   45911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:14:43.093838   45911 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:14:43.096653   45911 out.go:177] * Stopping node "old-k8s-version-879273"  ...
	I0108 21:14:43.098196   45911 main.go:141] libmachine: Stopping "old-k8s-version-879273"...
	I0108 21:14:43.098222   45911 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetState
	I0108 21:14:43.100353   45911 main.go:141] libmachine: (old-k8s-version-879273) Calling .Stop
	I0108 21:14:43.106957   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 0/60
	I0108 21:14:44.107274   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 1/60
	I0108 21:14:45.109710   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 2/60
	I0108 21:14:46.111432   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 3/60
	I0108 21:14:47.112991   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 4/60
	I0108 21:14:48.116057   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 5/60
	I0108 21:14:49.117646   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 6/60
	I0108 21:14:50.119049   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 7/60
	I0108 21:14:51.120973   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 8/60
	I0108 21:14:52.122741   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 9/60
	I0108 21:14:53.124341   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 10/60
	I0108 21:14:54.126051   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 11/60
	I0108 21:14:55.128490   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 12/60
	I0108 21:14:56.129937   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 13/60
	I0108 21:14:57.131539   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 14/60
	I0108 21:14:58.133703   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 15/60
	I0108 21:14:59.135117   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 16/60
	I0108 21:15:00.137122   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 17/60
	I0108 21:15:01.138651   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 18/60
	I0108 21:15:02.141087   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 19/60
	I0108 21:15:03.142782   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 20/60
	I0108 21:15:04.512224   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 21/60
	I0108 21:15:05.810363   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 22/60
	I0108 21:15:06.952435   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 23/60
	I0108 21:15:07.953919   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 24/60
	I0108 21:15:08.956583   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 25/60
	I0108 21:15:09.958502   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 26/60
	I0108 21:15:10.960529   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 27/60
	I0108 21:15:11.962229   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 28/60
	I0108 21:15:12.964265   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 29/60
	I0108 21:15:13.966920   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 30/60
	I0108 21:15:14.968679   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 31/60
	I0108 21:15:15.969973   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 32/60
	I0108 21:15:16.972140   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 33/60
	I0108 21:15:17.974094   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 34/60
	I0108 21:15:18.975728   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 35/60
	I0108 21:15:19.977661   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 36/60
	I0108 21:15:20.979930   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 37/60
	I0108 21:15:21.981611   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 38/60
	I0108 21:15:22.983125   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 39/60
	I0108 21:15:23.984434   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 40/60
	I0108 21:15:24.986765   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 41/60
	I0108 21:15:25.988345   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 42/60
	I0108 21:15:26.989984   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 43/60
	I0108 21:15:27.991693   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 44/60
	I0108 21:15:28.993854   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 45/60
	I0108 21:15:29.995238   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 46/60
	I0108 21:15:30.996842   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 47/60
	I0108 21:15:31.998735   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 48/60
	I0108 21:15:33.000492   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 49/60
	I0108 21:15:34.002118   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 50/60
	I0108 21:15:35.003972   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 51/60
	I0108 21:15:36.005721   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 52/60
	I0108 21:15:37.007232   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 53/60
	I0108 21:15:38.008797   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 54/60
	I0108 21:15:39.011192   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 55/60
	I0108 21:15:40.013449   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 56/60
	I0108 21:15:41.015089   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 57/60
	I0108 21:15:42.018020   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 58/60
	I0108 21:15:43.019618   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 59/60
	I0108 21:15:44.020785   45911 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 21:15:44.020858   45911 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:15:44.020881   45911 retry.go:31] will retry after 1.426716398s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:15:45.448228   45911 stop.go:39] StopHost: old-k8s-version-879273
	I0108 21:15:45.448756   45911 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:15:45.448812   45911 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:15:45.467944   45911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I0108 21:15:45.468434   45911 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:15:45.468953   45911 main.go:141] libmachine: Using API Version  1
	I0108 21:15:45.468994   45911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:15:45.469374   45911 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:15:45.472214   45911 out.go:177] * Stopping node "old-k8s-version-879273"  ...
	I0108 21:15:45.474302   45911 main.go:141] libmachine: Stopping "old-k8s-version-879273"...
	I0108 21:15:45.474326   45911 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetState
	I0108 21:15:45.476537   45911 main.go:141] libmachine: (old-k8s-version-879273) Calling .Stop
	I0108 21:15:45.480617   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 0/60
	I0108 21:15:46.482784   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 1/60
	I0108 21:15:47.484375   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 2/60
	I0108 21:15:48.486717   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 3/60
	I0108 21:15:49.488441   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 4/60
	I0108 21:15:50.490699   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 5/60
	I0108 21:15:51.492432   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 6/60
	I0108 21:15:52.494679   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 7/60
	I0108 21:15:53.496115   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 8/60
	I0108 21:15:54.498128   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 9/60
	I0108 21:15:55.500590   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 10/60
	I0108 21:15:56.502416   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 11/60
	I0108 21:15:57.503905   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 12/60
	I0108 21:15:58.505658   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 13/60
	I0108 21:15:59.507087   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 14/60
	I0108 21:16:00.509277   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 15/60
	I0108 21:16:01.511114   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 16/60
	I0108 21:16:02.513663   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 17/60
	I0108 21:16:03.515531   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 18/60
	I0108 21:16:04.517924   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 19/60
	I0108 21:16:05.519740   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 20/60
	I0108 21:16:06.521514   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 21/60
	I0108 21:16:07.523321   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 22/60
	I0108 21:16:08.525089   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 23/60
	I0108 21:16:09.526860   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 24/60
	I0108 21:16:10.529390   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 25/60
	I0108 21:16:11.531373   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 26/60
	I0108 21:16:12.533128   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 27/60
	I0108 21:16:13.534690   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 28/60
	I0108 21:16:14.537084   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 29/60
	I0108 21:16:15.538649   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 30/60
	I0108 21:16:16.540445   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 31/60
	I0108 21:16:17.542150   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 32/60
	I0108 21:16:18.543463   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 33/60
	I0108 21:16:19.544982   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 34/60
	I0108 21:16:20.546702   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 35/60
	I0108 21:16:21.548216   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 36/60
	I0108 21:16:22.550867   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 37/60
	I0108 21:16:23.552365   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 38/60
	I0108 21:16:24.553742   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 39/60
	I0108 21:16:25.556325   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 40/60
	I0108 21:16:26.558660   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 41/60
	I0108 21:16:27.561209   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 42/60
	I0108 21:16:28.563773   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 43/60
	I0108 21:16:29.565458   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 44/60
	I0108 21:16:30.567407   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 45/60
	I0108 21:16:31.569240   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 46/60
	I0108 21:16:32.570817   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 47/60
	I0108 21:16:33.572441   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 48/60
	I0108 21:16:34.574788   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 49/60
	I0108 21:16:35.577269   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 50/60
	I0108 21:16:36.578973   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 51/60
	I0108 21:16:37.581580   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 52/60
	I0108 21:16:38.583682   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 53/60
	I0108 21:16:39.585670   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 54/60
	I0108 21:16:40.587233   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 55/60
	I0108 21:16:41.589042   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 56/60
	I0108 21:16:42.591006   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 57/60
	I0108 21:16:43.593724   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 58/60
	I0108 21:16:44.595288   45911 main.go:141] libmachine: (old-k8s-version-879273) Waiting for machine to stop 59/60
	I0108 21:16:45.596355   45911 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 21:16:45.596412   45911 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:16:45.600985   45911 out.go:177] 
	W0108 21:16:45.602847   45911 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0108 21:16:45.602871   45911 out.go:239] * 
	* 
	W0108 21:16:45.605270   45911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:16:45.606852   45911 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-879273 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-879273 -n old-k8s-version-879273
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-879273 -n old-k8s-version-879273: exit status 3 (18.585181536s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:17:04.192498   47407 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.130:22: connect: no route to host
	E0108 21:17:04.192524   47407 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.130:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-879273" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (141.20s)
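
The stop above times out after two rounds of 60 one-second waits on the VM, and the follow-up status check can no longer reach the guest over SSH. A minimal triage sketch with virsh, assuming the libvirt domain carries the profile name (as the Stopping "old-k8s-version-879273" lines suggest) and the qemu:///system URI from the profile config; these commands are a manual debugging aid, not part of the test harness:

    # Sketch only: inspect the domain that minikube could not stop cleanly.
    virsh --connect qemu:///system list --all
    virsh --connect qemu:///system dominfo old-k8s-version-879273
    # Forcing it off is a last-resort manual fallback, not something the harness does:
    virsh --connect qemu:///system destroy old-k8s-version-879273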

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-879273 -n old-k8s-version-879273
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-879273 -n old-k8s-version-879273: exit status 3 (3.231784889s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:17:07.424515   47809 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.130:22: connect: no route to host
	E0108 21:17:07.424546   47809 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.130:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-879273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-879273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.146781542s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.130:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-879273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-879273 -n old-k8s-version-879273
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-879273 -n old-k8s-version-879273: exit status 3 (3.063379636s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:17:16.636409   47880 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.130:22: connect: no route to host
	E0108 21:17:16.636428   47880 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.130:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-879273" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.44s)
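
Both the status probe and the addon enable above fail on the same SSH dial error, so the addon step never reaches the guest: the previous stop left the host in "Error" rather than the expected "Stopped". A quick reachability check mirroring that error, assuming nc is available on the CI host (an assumption, not part of the recorded run):

    # Sketch only: reproduce the connectivity failure reported above.
    nc -vz -w 3 192.168.61.130 22 || echo "guest SSH unreachable (matches the 'no route to host' errors)"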

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (317.02s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-046839 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-046839 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (5m12.883025301s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-046839] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-046839 in cluster pause-046839
	* Updating the running kvm2 "pause-046839" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-046839" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:17:33.208951   48106 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:17:33.209079   48106 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:17:33.209092   48106 out.go:309] Setting ErrFile to fd 2...
	I0108 21:17:33.209100   48106 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:17:33.209345   48106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:17:33.209910   48106 out.go:303] Setting JSON to false
	I0108 21:17:33.210879   48106 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7177,"bootTime":1704741476,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:17:33.210937   48106 start.go:138] virtualization: kvm guest
	I0108 21:17:33.213821   48106 out.go:177] * [pause-046839] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:17:33.215551   48106 notify.go:220] Checking for updates...
	I0108 21:17:33.217146   48106 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:17:33.218690   48106 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:17:33.220330   48106 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:17:33.222066   48106 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:17:33.223806   48106 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:17:33.225626   48106 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:17:33.227641   48106 config.go:182] Loaded profile config "pause-046839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:17:33.228121   48106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:17:33.228188   48106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:17:33.243807   48106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0108 21:17:33.244296   48106 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:17:33.244947   48106 main.go:141] libmachine: Using API Version  1
	I0108 21:17:33.244988   48106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:17:33.245353   48106 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:17:33.245562   48106 main.go:141] libmachine: (pause-046839) Calling .DriverName
	I0108 21:17:33.245900   48106 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:17:33.246277   48106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:17:33.246316   48106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:17:33.260576   48106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38937
	I0108 21:17:33.261123   48106 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:17:33.261649   48106 main.go:141] libmachine: Using API Version  1
	I0108 21:17:33.261678   48106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:17:33.262064   48106 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:17:33.262313   48106 main.go:141] libmachine: (pause-046839) Calling .DriverName
	I0108 21:17:33.300472   48106 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 21:17:33.302505   48106 start.go:298] selected driver: kvm2
	I0108 21:17:33.302528   48106 start.go:902] validating driver "kvm2" against &{Name:pause-046839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:pause-046839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installe
r:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:17:33.302668   48106 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:17:33.303036   48106 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:17:33.303120   48106 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:17:33.319656   48106 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:17:33.320785   48106 cni.go:84] Creating CNI manager for ""
	I0108 21:17:33.320816   48106 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:17:33.320839   48106 start_flags.go:323] config:
	{Name:pause-046839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-046839 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false por
tainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:17:33.321090   48106 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:17:33.323592   48106 out.go:177] * Starting control plane node pause-046839 in cluster pause-046839
	I0108 21:17:33.325480   48106 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:17:33.325939   48106 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:17:33.326005   48106 cache.go:56] Caching tarball of preloaded images
	I0108 21:17:33.326108   48106 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:17:33.326120   48106 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:17:33.326376   48106 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/pause-046839/config.json ...
	I0108 21:17:33.326638   48106 start.go:365] acquiring machines lock for pause-046839: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:21:52.214227   48106 start.go:369] acquired machines lock for "pause-046839" in 4m18.887550762s
	I0108 21:21:52.214275   48106 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:21:52.214290   48106 fix.go:54] fixHost starting: 
	I0108 21:21:52.214630   48106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:21:52.214665   48106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:21:52.228920   48106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32841
	I0108 21:21:52.229297   48106 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:21:52.229719   48106 main.go:141] libmachine: Using API Version  1
	I0108 21:21:52.229746   48106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:21:52.230155   48106 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:21:52.230319   48106 main.go:141] libmachine: (pause-046839) Calling .DriverName
	I0108 21:21:52.230434   48106 main.go:141] libmachine: (pause-046839) Calling .GetState
	I0108 21:21:52.231818   48106 fix.go:102] recreateIfNeeded on pause-046839: state=Running err=<nil>
	W0108 21:21:52.231851   48106 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:21:52.233609   48106 out.go:177] * Updating the running kvm2 "pause-046839" VM ...
	I0108 21:21:52.234782   48106 machine.go:88] provisioning docker machine ...
	I0108 21:21:52.234803   48106 main.go:141] libmachine: (pause-046839) Calling .DriverName
	I0108 21:21:52.235050   48106 main.go:141] libmachine: (pause-046839) Calling .GetMachineName
	I0108 21:21:52.235224   48106 buildroot.go:166] provisioning hostname "pause-046839"
	I0108 21:21:52.235243   48106 main.go:141] libmachine: (pause-046839) Calling .GetMachineName
	I0108 21:21:52.235398   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHHostname
	I0108 21:21:52.237792   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:52.238232   48106 main.go:141] libmachine: (pause-046839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:40:35", ip: ""} in network mk-pause-046839: {Iface:virbr4 ExpiryTime:2024-01-08 22:16:42 +0000 UTC Type:0 Mac:52:54:00:51:40:35 Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:pause-046839 Clientid:01:52:54:00:51:40:35}
	I0108 21:21:52.238265   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined IP address 192.168.72.74 and MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:52.238433   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHPort
	I0108 21:21:52.238594   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHKeyPath
	I0108 21:21:52.238774   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHKeyPath
	I0108 21:21:52.238939   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHUsername
	I0108 21:21:52.239153   48106 main.go:141] libmachine: Using SSH client type: native
	I0108 21:21:52.239534   48106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0108 21:21:52.239549   48106 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-046839 && echo "pause-046839" | sudo tee /etc/hostname
	I0108 21:21:52.398683   48106 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-046839
	
	I0108 21:21:52.398712   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHHostname
	I0108 21:21:52.401570   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:52.401946   48106 main.go:141] libmachine: (pause-046839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:40:35", ip: ""} in network mk-pause-046839: {Iface:virbr4 ExpiryTime:2024-01-08 22:16:42 +0000 UTC Type:0 Mac:52:54:00:51:40:35 Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:pause-046839 Clientid:01:52:54:00:51:40:35}
	I0108 21:21:52.401968   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined IP address 192.168.72.74 and MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:52.402134   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHPort
	I0108 21:21:52.402393   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHKeyPath
	I0108 21:21:52.402566   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHKeyPath
	I0108 21:21:52.402712   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHUsername
	I0108 21:21:52.402882   48106 main.go:141] libmachine: Using SSH client type: native
	I0108 21:21:52.403226   48106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0108 21:21:52.403254   48106 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-046839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-046839/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-046839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:21:52.537307   48106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:21:52.537339   48106 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17907-10702/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-10702/.minikube}
	I0108 21:21:52.537389   48106 buildroot.go:174] setting up certificates
	I0108 21:21:52.537411   48106 provision.go:83] configureAuth start
	I0108 21:21:52.537428   48106 main.go:141] libmachine: (pause-046839) Calling .GetMachineName
	I0108 21:21:52.537703   48106 main.go:141] libmachine: (pause-046839) Calling .GetIP
	I0108 21:21:52.540084   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:52.540461   48106 main.go:141] libmachine: (pause-046839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:40:35", ip: ""} in network mk-pause-046839: {Iface:virbr4 ExpiryTime:2024-01-08 22:16:42 +0000 UTC Type:0 Mac:52:54:00:51:40:35 Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:pause-046839 Clientid:01:52:54:00:51:40:35}
	I0108 21:21:52.540490   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined IP address 192.168.72.74 and MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:52.540585   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHHostname
	I0108 21:21:52.542841   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:52.543171   48106 main.go:141] libmachine: (pause-046839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:40:35", ip: ""} in network mk-pause-046839: {Iface:virbr4 ExpiryTime:2024-01-08 22:16:42 +0000 UTC Type:0 Mac:52:54:00:51:40:35 Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:pause-046839 Clientid:01:52:54:00:51:40:35}
	I0108 21:21:52.543201   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined IP address 192.168.72.74 and MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:52.543334   48106 provision.go:138] copyHostCerts
	I0108 21:21:52.543398   48106 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem, removing ...
	I0108 21:21:52.543408   48106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 21:21:52.543482   48106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem (1675 bytes)
	I0108 21:21:52.543580   48106 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem, removing ...
	I0108 21:21:52.543590   48106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 21:21:52.543614   48106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem (1082 bytes)
	I0108 21:21:52.543680   48106 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem, removing ...
	I0108 21:21:52.543687   48106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 21:21:52.543707   48106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem (1123 bytes)
	I0108 21:21:52.543754   48106 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem org=jenkins.pause-046839 san=[192.168.72.74 192.168.72.74 localhost 127.0.0.1 minikube pause-046839]
	I0108 21:21:53.096997   48106 provision.go:172] copyRemoteCerts
	I0108 21:21:53.097054   48106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:21:53.097074   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHHostname
	I0108 21:21:53.099599   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:53.099945   48106 main.go:141] libmachine: (pause-046839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:40:35", ip: ""} in network mk-pause-046839: {Iface:virbr4 ExpiryTime:2024-01-08 22:16:42 +0000 UTC Type:0 Mac:52:54:00:51:40:35 Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:pause-046839 Clientid:01:52:54:00:51:40:35}
	I0108 21:21:53.099976   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined IP address 192.168.72.74 and MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:53.100159   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHPort
	I0108 21:21:53.100355   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHKeyPath
	I0108 21:21:53.100480   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHUsername
	I0108 21:21:53.100598   48106 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/pause-046839/id_rsa Username:docker}
	I0108 21:21:53.197500   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 21:21:53.222825   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:21:53.249813   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 21:21:53.275578   48106 provision.go:86] duration metric: configureAuth took 738.151026ms
	I0108 21:21:53.275606   48106 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:21:53.275856   48106 config.go:182] Loaded profile config "pause-046839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:21:53.275928   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHHostname
	I0108 21:21:53.278702   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:53.279068   48106 main.go:141] libmachine: (pause-046839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:40:35", ip: ""} in network mk-pause-046839: {Iface:virbr4 ExpiryTime:2024-01-08 22:16:42 +0000 UTC Type:0 Mac:52:54:00:51:40:35 Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:pause-046839 Clientid:01:52:54:00:51:40:35}
	I0108 21:21:53.279113   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined IP address 192.168.72.74 and MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:53.279268   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHPort
	I0108 21:21:53.279508   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHKeyPath
	I0108 21:21:53.279671   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHKeyPath
	I0108 21:21:53.279813   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHUsername
	I0108 21:21:53.280036   48106 main.go:141] libmachine: Using SSH client type: native
	I0108 21:21:53.280380   48106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0108 21:21:53.280398   48106 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:21:58.891360   48106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:21:58.891390   48106 machine.go:91] provisioned docker machine in 6.65659515s
	I0108 21:21:58.891401   48106 start.go:300] post-start starting for "pause-046839" (driver="kvm2")
	I0108 21:21:58.891412   48106 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:21:58.891427   48106 main.go:141] libmachine: (pause-046839) Calling .DriverName
	I0108 21:21:58.891766   48106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:21:58.891786   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHHostname
	I0108 21:21:58.894881   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:58.895285   48106 main.go:141] libmachine: (pause-046839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:40:35", ip: ""} in network mk-pause-046839: {Iface:virbr4 ExpiryTime:2024-01-08 22:16:42 +0000 UTC Type:0 Mac:52:54:00:51:40:35 Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:pause-046839 Clientid:01:52:54:00:51:40:35}
	I0108 21:21:58.895313   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined IP address 192.168.72.74 and MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:58.895484   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHPort
	I0108 21:21:58.895699   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHKeyPath
	I0108 21:21:58.895849   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHUsername
	I0108 21:21:58.895986   48106 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/pause-046839/id_rsa Username:docker}
	I0108 21:21:58.993335   48106 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:21:58.998834   48106 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:21:58.998861   48106 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/addons for local assets ...
	I0108 21:21:58.998927   48106 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/files for local assets ...
	I0108 21:21:58.999012   48106 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> 178962.pem in /etc/ssl/certs
	I0108 21:21:58.999120   48106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:21:59.007290   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /etc/ssl/certs/178962.pem (1708 bytes)
	I0108 21:21:59.030315   48106 start.go:303] post-start completed in 138.901017ms
	I0108 21:21:59.030342   48106 fix.go:56] fixHost completed within 6.816060242s
	I0108 21:21:59.030360   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHHostname
	I0108 21:21:59.033288   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:59.033643   48106 main.go:141] libmachine: (pause-046839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:40:35", ip: ""} in network mk-pause-046839: {Iface:virbr4 ExpiryTime:2024-01-08 22:16:42 +0000 UTC Type:0 Mac:52:54:00:51:40:35 Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:pause-046839 Clientid:01:52:54:00:51:40:35}
	I0108 21:21:59.033672   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined IP address 192.168.72.74 and MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:59.033872   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHPort
	I0108 21:21:59.034084   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHKeyPath
	I0108 21:21:59.034252   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHKeyPath
	I0108 21:21:59.034388   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHUsername
	I0108 21:21:59.034549   48106 main.go:141] libmachine: Using SSH client type: native
	I0108 21:21:59.034934   48106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0108 21:21:59.034949   48106 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 21:21:59.168972   48106 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704748919.165479079
	
	I0108 21:21:59.168993   48106 fix.go:206] guest clock: 1704748919.165479079
	I0108 21:21:59.169000   48106 fix.go:219] Guest: 2024-01-08 21:21:59.165479079 +0000 UTC Remote: 2024-01-08 21:21:59.030345925 +0000 UTC m=+265.872741081 (delta=135.133154ms)
	I0108 21:21:59.169015   48106 fix.go:190] guest clock delta is within tolerance: 135.133154ms
	I0108 21:21:59.169019   48106 start.go:83] releasing machines lock for "pause-046839", held for 6.954760726s
	I0108 21:21:59.169043   48106 main.go:141] libmachine: (pause-046839) Calling .DriverName
	I0108 21:21:59.169320   48106 main.go:141] libmachine: (pause-046839) Calling .GetIP
	I0108 21:21:59.172401   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:59.172739   48106 main.go:141] libmachine: (pause-046839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:40:35", ip: ""} in network mk-pause-046839: {Iface:virbr4 ExpiryTime:2024-01-08 22:16:42 +0000 UTC Type:0 Mac:52:54:00:51:40:35 Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:pause-046839 Clientid:01:52:54:00:51:40:35}
	I0108 21:21:59.172782   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined IP address 192.168.72.74 and MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:59.172942   48106 main.go:141] libmachine: (pause-046839) Calling .DriverName
	I0108 21:21:59.173576   48106 main.go:141] libmachine: (pause-046839) Calling .DriverName
	I0108 21:21:59.173777   48106 main.go:141] libmachine: (pause-046839) Calling .DriverName
	I0108 21:21:59.173865   48106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:21:59.173904   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHHostname
	I0108 21:21:59.174041   48106 ssh_runner.go:195] Run: cat /version.json
	I0108 21:21:59.174068   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHHostname
	I0108 21:21:59.176979   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:59.177085   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:59.177332   48106 main.go:141] libmachine: (pause-046839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:40:35", ip: ""} in network mk-pause-046839: {Iface:virbr4 ExpiryTime:2024-01-08 22:16:42 +0000 UTC Type:0 Mac:52:54:00:51:40:35 Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:pause-046839 Clientid:01:52:54:00:51:40:35}
	I0108 21:21:59.177370   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined IP address 192.168.72.74 and MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:59.177490   48106 main.go:141] libmachine: (pause-046839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:40:35", ip: ""} in network mk-pause-046839: {Iface:virbr4 ExpiryTime:2024-01-08 22:16:42 +0000 UTC Type:0 Mac:52:54:00:51:40:35 Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:pause-046839 Clientid:01:52:54:00:51:40:35}
	I0108 21:21:59.177503   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHPort
	I0108 21:21:59.177516   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined IP address 192.168.72.74 and MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:21:59.177658   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHKeyPath
	I0108 21:21:59.177733   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHPort
	I0108 21:21:59.177832   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHKeyPath
	I0108 21:21:59.177832   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHUsername
	I0108 21:21:59.177969   48106 main.go:141] libmachine: (pause-046839) Calling .GetSSHUsername
	I0108 21:21:59.178006   48106 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/pause-046839/id_rsa Username:docker}
	I0108 21:21:59.178150   48106 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/pause-046839/id_rsa Username:docker}
	I0108 21:21:59.269586   48106 ssh_runner.go:195] Run: systemctl --version
	I0108 21:21:59.296597   48106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:21:59.492573   48106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 21:21:59.512879   48106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:21:59.512968   48106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:21:59.733869   48106 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 21:21:59.733900   48106 start.go:475] detecting cgroup driver to use...
	I0108 21:21:59.733972   48106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:21:59.798680   48106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:21:59.830724   48106 docker.go:217] disabling cri-docker service (if available) ...
	I0108 21:21:59.830805   48106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:21:59.872583   48106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:21:59.903353   48106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:22:00.200986   48106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:22:00.529194   48106 docker.go:233] disabling docker service ...
	I0108 21:22:00.529273   48106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:22:00.565597   48106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:22:00.592015   48106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:22:00.917757   48106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:22:01.181360   48106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:22:01.195509   48106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:22:01.216913   48106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:22:01.216982   48106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:22:01.229073   48106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:22:01.229129   48106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:22:01.241370   48106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:22:01.253505   48106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:22:01.265229   48106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:22:01.277341   48106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:22:01.288280   48106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:22:01.297820   48106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:22:01.512661   48106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:22:02.659943   48106 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.147245932s)
	I0108 21:22:02.659975   48106 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:22:02.660041   48106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:22:02.665249   48106 start.go:543] Will wait 60s for crictl version
	I0108 21:22:02.665304   48106 ssh_runner.go:195] Run: which crictl
	I0108 21:22:02.668986   48106 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:22:02.710468   48106 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 21:22:02.710537   48106 ssh_runner.go:195] Run: crio --version
	I0108 21:22:02.761162   48106 ssh_runner.go:195] Run: crio --version
	I0108 21:22:02.815285   48106 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 21:22:02.816766   48106 main.go:141] libmachine: (pause-046839) Calling .GetIP
	I0108 21:22:02.819399   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:22:02.819711   48106 main.go:141] libmachine: (pause-046839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:40:35", ip: ""} in network mk-pause-046839: {Iface:virbr4 ExpiryTime:2024-01-08 22:16:42 +0000 UTC Type:0 Mac:52:54:00:51:40:35 Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:pause-046839 Clientid:01:52:54:00:51:40:35}
	I0108 21:22:02.819741   48106 main.go:141] libmachine: (pause-046839) DBG | domain pause-046839 has defined IP address 192.168.72.74 and MAC address 52:54:00:51:40:35 in network mk-pause-046839
	I0108 21:22:02.819937   48106 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0108 21:22:02.824500   48106 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:22:02.824542   48106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:22:02.871946   48106 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:22:02.871966   48106 crio.go:415] Images already preloaded, skipping extraction
	I0108 21:22:02.872017   48106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:22:02.907969   48106 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:22:02.907989   48106 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:22:02.908053   48106 ssh_runner.go:195] Run: crio config
	I0108 21:22:02.966450   48106 cni.go:84] Creating CNI manager for ""
	I0108 21:22:02.966470   48106 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:22:02.966486   48106 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:22:02.966505   48106 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.74 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-046839 NodeName:pause-046839 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:22:02.966632   48106 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-046839"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:22:02.966697   48106 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-046839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-046839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:22:02.966744   48106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:22:02.976696   48106 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:22:02.976758   48106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:22:02.984710   48106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0108 21:22:03.000635   48106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:22:03.016673   48106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0108 21:22:03.032624   48106 ssh_runner.go:195] Run: grep 192.168.72.74	control-plane.minikube.internal$ /etc/hosts
	I0108 21:22:03.036481   48106 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/pause-046839 for IP: 192.168.72.74
	I0108 21:22:03.036513   48106 certs.go:190] acquiring lock for shared ca certs: {Name:mke01aa9d73e320a9a3907677cf29c75f0fa86d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:22:03.036674   48106 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key
	I0108 21:22:03.036710   48106 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key
	I0108 21:22:03.036775   48106 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/pause-046839/client.key
	I0108 21:22:03.036852   48106 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/pause-046839/apiserver.key.5cb7d44a
	I0108 21:22:03.036890   48106 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/pause-046839/proxy-client.key
	I0108 21:22:03.037004   48106 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem (1338 bytes)
	W0108 21:22:03.037033   48106 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896_empty.pem, impossibly tiny 0 bytes
	I0108 21:22:03.037043   48106 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:22:03.037064   48106 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem (1082 bytes)
	I0108 21:22:03.037085   48106 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:22:03.037111   48106 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem (1675 bytes)
	I0108 21:22:03.037148   48106 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem (1708 bytes)
	I0108 21:22:03.037682   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/pause-046839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:22:03.061542   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/pause-046839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:22:03.085641   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/pause-046839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:22:03.109702   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/pause-046839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:22:03.133035   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:22:03.157840   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 21:22:03.181226   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:22:03.202520   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:22:03.225150   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /usr/share/ca-certificates/178962.pem (1708 bytes)
	I0108 21:22:03.248082   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:22:03.269982   48106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem --> /usr/share/ca-certificates/17896.pem (1338 bytes)
	I0108 21:22:03.291796   48106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:22:03.308456   48106 ssh_runner.go:195] Run: openssl version
	I0108 21:22:03.314731   48106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178962.pem && ln -fs /usr/share/ca-certificates/178962.pem /etc/ssl/certs/178962.pem"
	I0108 21:22:03.326710   48106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178962.pem
	I0108 21:22:03.331543   48106 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 21:22:03.331587   48106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178962.pem
	I0108 21:22:03.337246   48106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178962.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:22:03.347953   48106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:22:03.359883   48106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:22:03.364857   48106 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:22:03.364936   48106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:22:03.370910   48106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:22:03.381606   48106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17896.pem && ln -fs /usr/share/ca-certificates/17896.pem /etc/ssl/certs/17896.pem"
	I0108 21:22:03.394473   48106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17896.pem
	I0108 21:22:03.399149   48106 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 21:22:03.399207   48106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17896.pem
	I0108 21:22:03.405024   48106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17896.pem /etc/ssl/certs/51391683.0"
	I0108 21:22:03.415253   48106 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:22:03.419930   48106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 21:22:03.425496   48106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 21:22:03.430999   48106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 21:22:03.436623   48106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 21:22:03.466136   48106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 21:22:03.485194   48106 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0108 21:22:03.529615   48106 kubeadm.go:404] StartCluster: {Name:pause-046839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-046839 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:22:03.529735   48106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 21:22:03.529797   48106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:22:03.869541   48106 cri.go:89] found id: "1cf9f8a0511eee71f9c68209e63bb555b6338af22e6c873c3332b06d7abef9ce"
	I0108 21:22:03.869565   48106 cri.go:89] found id: "e72a200434a99c4c582c1bcf7674f552c1489ca3bad5cc11d51749cd756d7e50"
	I0108 21:22:03.869569   48106 cri.go:89] found id: "99f1995e0a3a7dba5385450cd5a509864b68b78aabd46f0a1fe5fee2eff47238"
	I0108 21:22:03.869573   48106 cri.go:89] found id: "6c29d633d222528a2efd8b97684a5d1b3ff2beaafc0635d7c40ad76fdcb6e5e9"
	I0108 21:22:03.869576   48106 cri.go:89] found id: "bf71a175c9ed7508cbb00d074a3126ae2f416d878d2c2bf2d92832647129093f"
	I0108 21:22:03.869585   48106 cri.go:89] found id: "f9e48a5932f27209b78257c9210c40dc44cbd67a5c884c0e4517cb74370c105c"
	I0108 21:22:03.869589   48106 cri.go:89] found id: ""
	I0108 21:22:03.869629   48106 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-046839 -n pause-046839
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-046839 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-046839 logs -n 25: (1.416237829s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-879273                              | old-k8s-version-879273    | jenkins | v1.32.0 | 08 Jan 24 21:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-626488 sudo                            | NoKubernetes-626488       | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC |                     |
	|         | systemctl is-active --quiet                            |                           |         |         |                     |                     |
	|         | service kubelet                                        |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-626488                                 | NoKubernetes-626488       | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	| start   | -p force-systemd-flag-162170                           | force-systemd-flag-162170 | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|         | --memory=2048 --force-systemd                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-631345                              | running-upgrade-631345    | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	| delete  | -p force-systemd-env-467534                            | force-systemd-env-467534  | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	| start   | -p cert-expiration-001550                              | cert-expiration-001550    | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:16 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p cert-options-686681                                 | cert-options-686681       | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:16 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-162170 ssh cat                      | force-systemd-flag-162170 | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-162170                           | force-systemd-flag-162170 | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	| start   | -p pause-046839 --memory=2048                          | pause-046839              | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:17 UTC |
	|         | --install-addons=false                                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                               |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| ssh     | cert-options-686681 ssh                                | cert-options-686681       | jenkins | v1.32.0 | 08 Jan 24 21:16 UTC | 08 Jan 24 21:16 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-686681 -- sudo                         | cert-options-686681       | jenkins | v1.32.0 | 08 Jan 24 21:16 UTC | 08 Jan 24 21:16 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-686681                                 | cert-options-686681       | jenkins | v1.32.0 | 08 Jan 24 21:16 UTC | 08 Jan 24 21:16 UTC |
	| start   | -p no-preload-420119                                   | no-preload-420119         | jenkins | v1.32.0 | 08 Jan 24 21:16 UTC | 08 Jan 24 21:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-879273             | old-k8s-version-879273    | jenkins | v1.32.0 | 08 Jan 24 21:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-879273                              | old-k8s-version-879273    | jenkins | v1.32.0 | 08 Jan 24 21:17 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                           |         |         |                     |                     |
	| start   | -p pause-046839                                        | pause-046839              | jenkins | v1.32.0 | 08 Jan 24 21:17 UTC | 08 Jan 24 21:22 UTC |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-420119             | no-preload-420119         | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC | 08 Jan 24 21:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-420119                                   | no-preload-420119         | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| start   | -p cert-expiration-001550                              | cert-expiration-001550    | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC | 08 Jan 24 21:22 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-420119                  | no-preload-420119         | jenkins | v1.32.0 | 08 Jan 24 21:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-420119                                   | no-preload-420119         | jenkins | v1.32.0 | 08 Jan 24 21:21 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-001550                              | cert-expiration-001550    | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	| start   | -p embed-certs-930023                                  | embed-certs-930023        | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:22:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:22:24.816132   49818 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:22:24.816270   49818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:22:24.816279   49818 out.go:309] Setting ErrFile to fd 2...
	I0108 21:22:24.816283   49818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:22:24.816479   49818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:22:24.817050   49818 out.go:303] Setting JSON to false
	I0108 21:22:24.817915   49818 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7469,"bootTime":1704741476,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:22:24.817973   49818 start.go:138] virtualization: kvm guest
	I0108 21:22:24.820677   49818 out.go:177] * [embed-certs-930023] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:22:24.822580   49818 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:22:24.822589   49818 notify.go:220] Checking for updates...
	I0108 21:22:24.824272   49818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:22:24.826097   49818 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:22:24.827687   49818 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:22:24.829231   49818 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:22:24.830951   49818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:22:24.833138   49818 config.go:182] Loaded profile config "no-preload-420119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 21:22:24.833266   49818 config.go:182] Loaded profile config "old-k8s-version-879273": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 21:22:24.833394   49818 config.go:182] Loaded profile config "pause-046839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:22:24.833474   49818 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:22:24.870126   49818 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 21:22:24.871501   49818 start.go:298] selected driver: kvm2
	I0108 21:22:24.871517   49818 start.go:902] validating driver "kvm2" against <nil>
	I0108 21:22:24.871530   49818 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:22:24.872252   49818 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:22:24.872347   49818 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:22:24.886741   49818 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:22:24.886788   49818 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 21:22:24.886979   49818 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:22:24.887035   49818 cni.go:84] Creating CNI manager for ""
	I0108 21:22:24.887047   49818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:22:24.887059   49818 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 21:22:24.887067   49818 start_flags.go:323] config:
	{Name:embed-certs-930023 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-930023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:22:24.887203   49818 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:22:24.889338   49818 out.go:177] * Starting control plane node embed-certs-930023 in cluster embed-certs-930023
	I0108 21:22:23.552495   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:27.230201   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:22:27.230229   48106 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:22:27.230245   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:27.284597   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:22:27.284625   48106 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:22:27.552043   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:27.557416   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:22:27.557448   48106 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:22:28.052004   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:28.057102   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:22:28.057132   48106 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:22:28.552921   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:28.568357   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:22:28.568392   48106 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:22:29.051931   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:29.057212   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 200:
	ok
	I0108 21:22:29.066101   48106 api_server.go:141] control plane version: v1.28.4
	I0108 21:22:29.066140   48106 api_server.go:131] duration metric: took 6.014267249s to wait for apiserver health ...
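The lines above show the pattern minikube follows while the control plane restarts: it polls the apiserver's /healthz endpoint, treating 403 (the anonymous probe is rejected before RBAC is bootstrapped) and 500 (post-start hooks such as rbac/bootstrap-roles still failing) as retryable, and stops once a 200 comes back. As a rough, hypothetical sketch of that retry pattern only (not minikube's actual code; the URL and the InsecureSkipVerify setting are assumptions made for the sketch, whereas the real probe authenticates with the cluster's certificates):

// waitForHealthz is a minimal, illustrative sketch of polling an apiserver
// /healthz endpoint until it reports 200 OK, as the log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// Assumption: TLS verification is skipped for the sketch; the real
	// probe uses the cluster CA and client certificates instead.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // 200: control plane is healthy
			}
			// 403 (RBAC not bootstrapped yet) and 500 (post-start hooks
			// still failing) are retryable, as seen in the log above.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.74:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}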
	I0108 21:22:29.066150   48106 cni.go:84] Creating CNI manager for ""
	I0108 21:22:29.066159   48106 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:22:29.068296   48106 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 21:22:24.890813   49818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:22:24.890851   49818 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:22:24.890859   49818 cache.go:56] Caching tarball of preloaded images
	I0108 21:22:24.890935   49818 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:22:24.890945   49818 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:22:24.891032   49818 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023/config.json ...
	I0108 21:22:24.891048   49818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023/config.json: {Name:mkdc54aa447c8da5b5aed4fc0de1cc18d12155c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:22:24.891170   49818 start.go:365] acquiring machines lock for embed-certs-930023: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:22:27.260329   49554 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.226:22: connect: no route to host
	I0108 21:22:30.332383   49554 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.226:22: connect: no route to host
	I0108 21:22:29.069844   48106 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 21:22:29.079272   48106 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
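The two commands above create /etc/cni/net.d and copy a generated bridge CNI config (1-k8s.conflist, 457 bytes) onto the node, matching the earlier "Configuring bridge CNI" message. The actual file contents are not reproduced in the log; purely as an illustration, a representative bridge conflist written by a small Go helper might look like the sketch below (the JSON body, subnet, and helper are assumptions, not the bytes minikube generated here):

// A hypothetical sketch of generating a bridge CNI config similar in shape
// to the 1-k8s.conflist copied in the log above.
package main

import (
	"log"
	"os"
)

// Assumption: this JSON is only representative of a bridge CNI conflist;
// the real 457-byte file is not shown in the log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}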
	I0108 21:22:29.097596   48106 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:22:29.110551   48106 system_pods.go:59] 6 kube-system pods found
	I0108 21:22:29.110591   48106 system_pods.go:61] "coredns-5dd5756b68-sqb52" [9af4e26a-25dc-4ac5-b6e3-d2532a643391] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 21:22:29.110607   48106 system_pods.go:61] "etcd-pause-046839" [d2e4d0a0-9053-424f-9758-dda322538df8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:22:29.110618   48106 system_pods.go:61] "kube-apiserver-pause-046839" [6ee06cd7-be94-49f7-9b93-83c2d1fe9629] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 21:22:29.110637   48106 system_pods.go:61] "kube-controller-manager-pause-046839" [b09c7542-31c5-4e44-91a9-5a1989ceb3b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:22:29.110647   48106 system_pods.go:61] "kube-proxy-66j2k" [e7615d32-a6f2-461d-b804-930d11feddf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:22:29.110659   48106 system_pods.go:61] "kube-scheduler-pause-046839" [4e0540b6-7e0b-49c4-b7be-a7ba6269293d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:22:29.110668   48106 system_pods.go:74] duration metric: took 13.051063ms to wait for pod list to return data ...
	I0108 21:22:29.110679   48106 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:22:29.114581   48106 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:22:29.114604   48106 node_conditions.go:123] node cpu capacity is 2
	I0108 21:22:29.114614   48106 node_conditions.go:105] duration metric: took 3.93116ms to run NodePressure ...
	I0108 21:22:29.114633   48106 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:22:29.351408   48106 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 21:22:29.357006   48106 kubeadm.go:787] kubelet initialised
	I0108 21:22:29.357027   48106 kubeadm.go:788] duration metric: took 5.59273ms waiting for restarted kubelet to initialise ...
	I0108 21:22:29.357034   48106 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:22:29.361886   48106 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sqb52" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:29.369614   48106 pod_ready.go:92] pod "coredns-5dd5756b68-sqb52" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:29.369637   48106 pod_ready.go:81] duration metric: took 7.721589ms waiting for pod "coredns-5dd5756b68-sqb52" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:29.369648   48106 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:31.375804   48106 pod_ready.go:102] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:36.412354   49554 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.226:22: connect: no route to host
	I0108 21:22:33.377871   48106 pod_ready.go:102] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:35.876962   48106 pod_ready.go:102] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:39.484337   49554 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.226:22: connect: no route to host
	I0108 21:22:38.377766   48106 pod_ready.go:102] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:40.876649   48106 pod_ready.go:102] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:42.376910   48106 pod_ready.go:92] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:42.376934   48106 pod_ready.go:81] duration metric: took 13.007276303s waiting for pod "etcd-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.376944   48106 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.383206   48106 pod_ready.go:92] pod "kube-apiserver-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:42.383225   48106 pod_ready.go:81] duration metric: took 6.274766ms waiting for pod "kube-apiserver-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.383233   48106 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.388310   48106 pod_ready.go:92] pod "kube-controller-manager-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:42.388329   48106 pod_ready.go:81] duration metric: took 5.090216ms waiting for pod "kube-controller-manager-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.388337   48106 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-66j2k" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.394273   48106 pod_ready.go:92] pod "kube-proxy-66j2k" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:42.394294   48106 pod_ready.go:81] duration metric: took 5.949412ms waiting for pod "kube-proxy-66j2k" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.394304   48106 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.399516   48106 pod_ready.go:92] pod "kube-scheduler-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:42.399538   48106 pod_ready.go:81] duration metric: took 5.227845ms waiting for pod "kube-scheduler-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.399546   48106 pod_ready.go:38] duration metric: took 13.042504384s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
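The block that just finished waits for each system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to report the Ready condition after the restart. As a hypothetical client-go sketch of that "poll until Ready" pattern only (the kubeconfig path and pod name below are taken from this log, but the code itself is illustrative, not minikube's implementation):

// A minimal, illustrative sketch of waiting for a kube-system pod to become Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: paths and names are placeholders drawn from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17907-10702/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-046839", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}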
	I0108 21:22:42.399566   48106 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:22:42.411839   48106 ops.go:34] apiserver oom_adj: -16
	I0108 21:22:42.411864   48106 kubeadm.go:640] restartCluster took 38.460488583s
	I0108 21:22:42.411873   48106 kubeadm.go:406] StartCluster complete in 38.882266124s
	I0108 21:22:42.411892   48106 settings.go:142] acquiring lock: {Name:mk91d3baf51872e4bb0758b94fca7c7249bb9666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:22:42.411980   48106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:22:42.413263   48106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/kubeconfig: {Name:mkeb2e8a20e31c0c2d5c7e8214a27af3141300ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:22:42.413531   48106 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:22:42.413619   48106 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:22:42.413750   48106 config.go:182] Loaded profile config "pause-046839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:22:42.415833   48106 out.go:177] * Enabled addons: 
	I0108 21:22:42.414535   48106 kapi.go:59] client config for pause-046839: &rest.Config{Host:"https://192.168.72.74:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/pause-046839/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/pause-046839/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]str
ing(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:22:42.417417   48106 addons.go:508] enable addons completed in 3.802684ms: enabled=[]
	I0108 21:22:42.420602   48106 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-046839" context rescaled to 1 replicas
	I0108 21:22:42.420632   48106 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:22:42.422488   48106 out.go:177] * Verifying Kubernetes components...
	I0108 21:22:42.423977   48106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:22:42.524732   48106 node_ready.go:35] waiting up to 6m0s for node "pause-046839" to be "Ready" ...
	I0108 21:22:42.524754   48106 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 21:22:42.575242   48106 node_ready.go:49] node "pause-046839" has status "Ready":"True"
	I0108 21:22:42.575265   48106 node_ready.go:38] duration metric: took 50.504316ms waiting for node "pause-046839" to be "Ready" ...
	I0108 21:22:42.575277   48106 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:22:42.776065   48106 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sqb52" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:43.173324   48106 pod_ready.go:92] pod "coredns-5dd5756b68-sqb52" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:43.173346   48106 pod_ready.go:81] duration metric: took 397.256083ms waiting for pod "coredns-5dd5756b68-sqb52" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:43.173357   48106 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:43.575700   48106 pod_ready.go:92] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:43.575726   48106 pod_ready.go:81] duration metric: took 402.362326ms waiting for pod "etcd-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:43.575738   48106 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:43.974565   48106 pod_ready.go:92] pod "kube-apiserver-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:43.974587   48106 pod_ready.go:81] duration metric: took 398.842314ms waiting for pod "kube-apiserver-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:43.974598   48106 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:44.374446   48106 pod_ready.go:92] pod "kube-controller-manager-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:44.374468   48106 pod_ready.go:81] duration metric: took 399.863823ms waiting for pod "kube-controller-manager-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:44.374477   48106 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-66j2k" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:44.774007   48106 pod_ready.go:92] pod "kube-proxy-66j2k" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:44.774030   48106 pod_ready.go:81] duration metric: took 399.546593ms waiting for pod "kube-proxy-66j2k" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:44.774038   48106 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:45.174913   48106 pod_ready.go:92] pod "kube-scheduler-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:45.174936   48106 pod_ready.go:81] duration metric: took 400.891464ms waiting for pod "kube-scheduler-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:45.174943   48106 pod_ready.go:38] duration metric: took 2.599658286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:22:45.174956   48106 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:22:45.175001   48106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:22:45.187531   48106 api_server.go:72] duration metric: took 2.766873053s to wait for apiserver process to appear ...
	I0108 21:22:45.187561   48106 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:22:45.187581   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:45.192625   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 200:
	ok
	I0108 21:22:45.193919   48106 api_server.go:141] control plane version: v1.28.4
	I0108 21:22:45.193940   48106 api_server.go:131] duration metric: took 6.374571ms to wait for apiserver health ...
	I0108 21:22:45.193948   48106 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:22:45.376592   48106 system_pods.go:59] 6 kube-system pods found
	I0108 21:22:45.376617   48106 system_pods.go:61] "coredns-5dd5756b68-sqb52" [9af4e26a-25dc-4ac5-b6e3-d2532a643391] Running
	I0108 21:22:45.376622   48106 system_pods.go:61] "etcd-pause-046839" [d2e4d0a0-9053-424f-9758-dda322538df8] Running
	I0108 21:22:45.376626   48106 system_pods.go:61] "kube-apiserver-pause-046839" [6ee06cd7-be94-49f7-9b93-83c2d1fe9629] Running
	I0108 21:22:45.376631   48106 system_pods.go:61] "kube-controller-manager-pause-046839" [b09c7542-31c5-4e44-91a9-5a1989ceb3b7] Running
	I0108 21:22:45.376634   48106 system_pods.go:61] "kube-proxy-66j2k" [e7615d32-a6f2-461d-b804-930d11feddf3] Running
	I0108 21:22:45.376638   48106 system_pods.go:61] "kube-scheduler-pause-046839" [4e0540b6-7e0b-49c4-b7be-a7ba6269293d] Running
	I0108 21:22:45.376643   48106 system_pods.go:74] duration metric: took 182.690214ms to wait for pod list to return data ...
	I0108 21:22:45.376650   48106 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:22:45.574664   48106 default_sa.go:45] found service account: "default"
	I0108 21:22:45.574687   48106 default_sa.go:55] duration metric: took 198.031923ms for default service account to be created ...
	I0108 21:22:45.574695   48106 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:22:45.776845   48106 system_pods.go:86] 6 kube-system pods found
	I0108 21:22:45.776870   48106 system_pods.go:89] "coredns-5dd5756b68-sqb52" [9af4e26a-25dc-4ac5-b6e3-d2532a643391] Running
	I0108 21:22:45.776880   48106 system_pods.go:89] "etcd-pause-046839" [d2e4d0a0-9053-424f-9758-dda322538df8] Running
	I0108 21:22:45.776885   48106 system_pods.go:89] "kube-apiserver-pause-046839" [6ee06cd7-be94-49f7-9b93-83c2d1fe9629] Running
	I0108 21:22:45.776889   48106 system_pods.go:89] "kube-controller-manager-pause-046839" [b09c7542-31c5-4e44-91a9-5a1989ceb3b7] Running
	I0108 21:22:45.776893   48106 system_pods.go:89] "kube-proxy-66j2k" [e7615d32-a6f2-461d-b804-930d11feddf3] Running
	I0108 21:22:45.776897   48106 system_pods.go:89] "kube-scheduler-pause-046839" [4e0540b6-7e0b-49c4-b7be-a7ba6269293d] Running
	I0108 21:22:45.776903   48106 system_pods.go:126] duration metric: took 202.2028ms to wait for k8s-apps to be running ...
	I0108 21:22:45.776909   48106 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:22:45.776952   48106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:22:45.790067   48106 system_svc.go:56] duration metric: took 13.144691ms WaitForService to wait for kubelet.
	I0108 21:22:45.790096   48106 kubeadm.go:581] duration metric: took 3.369446487s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:22:45.790113   48106 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:22:45.974030   48106 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:22:45.974058   48106 node_conditions.go:123] node cpu capacity is 2
	I0108 21:22:45.974068   48106 node_conditions.go:105] duration metric: took 183.950628ms to run NodePressure ...
	I0108 21:22:45.974078   48106 start.go:228] waiting for startup goroutines ...
	I0108 21:22:45.974084   48106 start.go:233] waiting for cluster config update ...
	I0108 21:22:45.974090   48106 start.go:242] writing updated cluster config ...
	I0108 21:22:45.974349   48106 ssh_runner.go:195] Run: rm -f paused
	I0108 21:22:46.020658   48106 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 21:22:46.023280   48106 out.go:177] * Done! kubectl is now configured to use "pause-046839" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 21:16:38 UTC, ends at Mon 2024-01-08 21:22:46 UTC. --
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.737580148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748966737555884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=9458e1f2-b99d-4e3a-9deb-3c206c4e286b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.738319070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4ccfe1c0-5c3c-47fc-a42c-09ab7080dd4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.738367742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4ccfe1c0-5c3c-47fc-a42c-09ab7080dd4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.738622064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c48fbf35495079c85076c2aa65479dbe2cb5ac1e1d58ee9acc34f2055678c0e,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704748948340818606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1724a578869bd4afc39040fb96805aff8c438683be6ce75fcd1f04eb92d178,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704748948314127462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash: f994b237,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854f808e04cd67296ae3625cc90f6564702fa4adecef4999912320c5ca5c9533,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704748941752570047,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e
63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b964404f73a9b54fbea9bd8faae45e3cfb06251333f585bfd94a53d7c1c3c44,PodSandboxId:28def94e5e6c5a6fdbf7d52d844319df917492caced6143cecec795f58f1d9f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704748941726024030,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e2670121e812a0e82ffcac2687cf4297c4ec4bce70a4793356c67050f8fd78,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704748941671398790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28835e43ff4219538f6d7e8713bfa65f4ae1b9e736725524f06819d0e2b588a4,PodSandboxId:82513df00f6b76f768764e73433e7a520442919063467bed95463142f7229633,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704748941693482960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04ddc1b5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289e84ca8abe46c26b95fe6ce12c1fa257cb0070a199e545506a338a86a4e30d,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704748926148337354,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash:
f994b237,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7984e52ff6beab5155133649d1aac1cdfcf1fc78c0d3a3d143109be96fcfe8eb,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704748925243885893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca96751246c22a1a430c5a792b4130efbeb05e4f62eb3785bb0c3cea59c265d,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704748924717132999,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97475594be55bfb2856752c1907c0fbce57563cff435b2a719090fb47f9610e9,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704748924607999578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9f8a0511eee71f9c68209e63bb555b6338af22e6c873c3332b06d7abef9ce,PodSandboxId:4585fde47a7d201514d2f8a3cc15620f0da5ee70c78bc34ff409487ccdba2849,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704748920722532559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04d
dc1b5,},Annotations:map[string]string{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72a200434a99c4c582c1bcf7674f552c1489ca3bad5cc11d51749cd756d7e50,PodSandboxId:c550bf6838cea38566320b1acd60f4383347b736c54a52db11608f5d323d317e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1704748920345068243,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4ccfe1c0-5c3c-47fc-a42c-09ab7080dd4c name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.781495959Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d11268f2-51ce-4cb5-8fcc-b92f72c7fd21 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.781581870Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d11268f2-51ce-4cb5-8fcc-b92f72c7fd21 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.782970300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=91f1c6a1-45fc-4b3f-ac1d-870250e24ce2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.783396439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748966783379686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=91f1c6a1-45fc-4b3f-ac1d-870250e24ce2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.784621321Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6ed97e98-86dd-43cf-8e73-e2c018d2ae9e name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.784676410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6ed97e98-86dd-43cf-8e73-e2c018d2ae9e name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.784920022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c48fbf35495079c85076c2aa65479dbe2cb5ac1e1d58ee9acc34f2055678c0e,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704748948340818606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1724a578869bd4afc39040fb96805aff8c438683be6ce75fcd1f04eb92d178,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704748948314127462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash: f994b237,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854f808e04cd67296ae3625cc90f6564702fa4adecef4999912320c5ca5c9533,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704748941752570047,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e
63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b964404f73a9b54fbea9bd8faae45e3cfb06251333f585bfd94a53d7c1c3c44,PodSandboxId:28def94e5e6c5a6fdbf7d52d844319df917492caced6143cecec795f58f1d9f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704748941726024030,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e2670121e812a0e82ffcac2687cf4297c4ec4bce70a4793356c67050f8fd78,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704748941671398790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28835e43ff4219538f6d7e8713bfa65f4ae1b9e736725524f06819d0e2b588a4,PodSandboxId:82513df00f6b76f768764e73433e7a520442919063467bed95463142f7229633,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704748941693482960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04ddc1b5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289e84ca8abe46c26b95fe6ce12c1fa257cb0070a199e545506a338a86a4e30d,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704748926148337354,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash:
f994b237,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7984e52ff6beab5155133649d1aac1cdfcf1fc78c0d3a3d143109be96fcfe8eb,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704748925243885893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca96751246c22a1a430c5a792b4130efbeb05e4f62eb3785bb0c3cea59c265d,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704748924717132999,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97475594be55bfb2856752c1907c0fbce57563cff435b2a719090fb47f9610e9,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704748924607999578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9f8a0511eee71f9c68209e63bb555b6338af22e6c873c3332b06d7abef9ce,PodSandboxId:4585fde47a7d201514d2f8a3cc15620f0da5ee70c78bc34ff409487ccdba2849,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704748920722532559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04d
dc1b5,},Annotations:map[string]string{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72a200434a99c4c582c1bcf7674f552c1489ca3bad5cc11d51749cd756d7e50,PodSandboxId:c550bf6838cea38566320b1acd60f4383347b736c54a52db11608f5d323d317e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1704748920345068243,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6ed97e98-86dd-43cf-8e73-e2c018d2ae9e name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.831714332Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=33e8a4ff-1886-47fd-bfba-3397bee741dd name=/runtime.v1.RuntimeService/Version
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.831771594Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=33e8a4ff-1886-47fd-bfba-3397bee741dd name=/runtime.v1.RuntimeService/Version
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.833693286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=59b7eb98-4c51-4401-854a-b256cb349613 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.834086734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748966834073798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=59b7eb98-4c51-4401-854a-b256cb349613 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.834856413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=972f31ef-ee03-4f47-a311-0b023e7c3f06 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.834903491Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=972f31ef-ee03-4f47-a311-0b023e7c3f06 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.835134078Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c48fbf35495079c85076c2aa65479dbe2cb5ac1e1d58ee9acc34f2055678c0e,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704748948340818606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1724a578869bd4afc39040fb96805aff8c438683be6ce75fcd1f04eb92d178,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704748948314127462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash: f994b237,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854f808e04cd67296ae3625cc90f6564702fa4adecef4999912320c5ca5c9533,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704748941752570047,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e
63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b964404f73a9b54fbea9bd8faae45e3cfb06251333f585bfd94a53d7c1c3c44,PodSandboxId:28def94e5e6c5a6fdbf7d52d844319df917492caced6143cecec795f58f1d9f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704748941726024030,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e2670121e812a0e82ffcac2687cf4297c4ec4bce70a4793356c67050f8fd78,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704748941671398790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28835e43ff4219538f6d7e8713bfa65f4ae1b9e736725524f06819d0e2b588a4,PodSandboxId:82513df00f6b76f768764e73433e7a520442919063467bed95463142f7229633,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704748941693482960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04ddc1b5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289e84ca8abe46c26b95fe6ce12c1fa257cb0070a199e545506a338a86a4e30d,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704748926148337354,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash:
f994b237,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7984e52ff6beab5155133649d1aac1cdfcf1fc78c0d3a3d143109be96fcfe8eb,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704748925243885893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca96751246c22a1a430c5a792b4130efbeb05e4f62eb3785bb0c3cea59c265d,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704748924717132999,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97475594be55bfb2856752c1907c0fbce57563cff435b2a719090fb47f9610e9,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704748924607999578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9f8a0511eee71f9c68209e63bb555b6338af22e6c873c3332b06d7abef9ce,PodSandboxId:4585fde47a7d201514d2f8a3cc15620f0da5ee70c78bc34ff409487ccdba2849,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704748920722532559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04d
dc1b5,},Annotations:map[string]string{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72a200434a99c4c582c1bcf7674f552c1489ca3bad5cc11d51749cd756d7e50,PodSandboxId:c550bf6838cea38566320b1acd60f4383347b736c54a52db11608f5d323d317e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1704748920345068243,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=972f31ef-ee03-4f47-a311-0b023e7c3f06 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.878774853Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9dfea453-1e87-42a5-979c-a7fcd589f162 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.878832590Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9dfea453-1e87-42a5-979c-a7fcd589f162 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.879825148Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=eefb56bb-29fb-4040-9987-cc9e9ae22c88 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.880156171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748966880144940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=eefb56bb-29fb-4040-9987-cc9e9ae22c88 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.880970282Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=db535ae0-6f5f-41b0-99c7-547f9dea0c3d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.881018413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=db535ae0-6f5f-41b0-99c7-547f9dea0c3d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:46 pause-046839 crio[2457]: time="2024-01-08 21:22:46.881370612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c48fbf35495079c85076c2aa65479dbe2cb5ac1e1d58ee9acc34f2055678c0e,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704748948340818606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1724a578869bd4afc39040fb96805aff8c438683be6ce75fcd1f04eb92d178,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704748948314127462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash: f994b237,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854f808e04cd67296ae3625cc90f6564702fa4adecef4999912320c5ca5c9533,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704748941752570047,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e
63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b964404f73a9b54fbea9bd8faae45e3cfb06251333f585bfd94a53d7c1c3c44,PodSandboxId:28def94e5e6c5a6fdbf7d52d844319df917492caced6143cecec795f58f1d9f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704748941726024030,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e2670121e812a0e82ffcac2687cf4297c4ec4bce70a4793356c67050f8fd78,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704748941671398790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28835e43ff4219538f6d7e8713bfa65f4ae1b9e736725524f06819d0e2b588a4,PodSandboxId:82513df00f6b76f768764e73433e7a520442919063467bed95463142f7229633,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704748941693482960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04ddc1b5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289e84ca8abe46c26b95fe6ce12c1fa257cb0070a199e545506a338a86a4e30d,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704748926148337354,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash:
f994b237,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7984e52ff6beab5155133649d1aac1cdfcf1fc78c0d3a3d143109be96fcfe8eb,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704748925243885893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca96751246c22a1a430c5a792b4130efbeb05e4f62eb3785bb0c3cea59c265d,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704748924717132999,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97475594be55bfb2856752c1907c0fbce57563cff435b2a719090fb47f9610e9,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704748924607999578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9f8a0511eee71f9c68209e63bb555b6338af22e6c873c3332b06d7abef9ce,PodSandboxId:4585fde47a7d201514d2f8a3cc15620f0da5ee70c78bc34ff409487ccdba2849,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704748920722532559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04d
dc1b5,},Annotations:map[string]string{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72a200434a99c4c582c1bcf7674f552c1489ca3bad5cc11d51749cd756d7e50,PodSandboxId:c550bf6838cea38566320b1acd60f4383347b736c54a52db11608f5d323d317e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1704748920345068243,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=db535ae0-6f5f-41b0-99c7-547f9dea0c3d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1c48fbf354950       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 seconds ago      Running             coredns                   2                   59d85f6aaf4a7       coredns-5dd5756b68-sqb52
	2b1724a578869       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   18 seconds ago      Running             kube-proxy                2                   1792283600f96       kube-proxy-66j2k
	854f808e04cd6       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   25 seconds ago      Running             kube-scheduler            2                   8d2c73e139c4e       kube-scheduler-pause-046839
	2b964404f73a9       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   25 seconds ago      Running             kube-controller-manager   2                   28def94e5e6c5       kube-controller-manager-pause-046839
	28835e43ff421       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   25 seconds ago      Running             kube-apiserver            2                   82513df00f6b7       kube-apiserver-pause-046839
	f3e2670121e81       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   25 seconds ago      Running             etcd                      2                   41efcad319c14       etcd-pause-046839
	289e84ca8abe4       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   40 seconds ago      Exited              kube-proxy                1                   1792283600f96       kube-proxy-66j2k
	7984e52ff6bea       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   41 seconds ago      Exited              coredns                   1                   59d85f6aaf4a7       coredns-5dd5756b68-sqb52
	9ca96751246c2       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   42 seconds ago      Exited              kube-scheduler            1                   8d2c73e139c4e       kube-scheduler-pause-046839
	97475594be55b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   42 seconds ago      Exited              etcd                      1                   41efcad319c14       etcd-pause-046839
	1cf9f8a0511ee       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   46 seconds ago      Exited              kube-apiserver            1                   4585fde47a7d2       kube-apiserver-pause-046839
	e72a200434a99       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   46 seconds ago      Exited              kube-controller-manager   1                   c550bf6838cea       kube-controller-manager-pause-046839
	
	
	==> coredns [1c48fbf35495079c85076c2aa65479dbe2cb5ac1e1d58ee9acc34f2055678c0e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50914 - 12076 "HINFO IN 7386635121953303377.3056170504652277233. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018503851s
	
	
	==> coredns [7984e52ff6beab5155133649d1aac1cdfcf1fc78c0d3a3d143109be96fcfe8eb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32838 - 1797 "HINFO IN 4888549285694502217.5742732887607354119. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016783024s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-046839
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-046839
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=pause-046839
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_17_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:17:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-046839
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:22:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:22:27 +0000   Mon, 08 Jan 2024 21:17:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:22:27 +0000   Mon, 08 Jan 2024 21:17:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:22:27 +0000   Mon, 08 Jan 2024 21:17:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:22:27 +0000   Mon, 08 Jan 2024 21:17:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.74
	  Hostname:    pause-046839
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 1529db4c73c643f285f32c36a62fa35a
	  System UUID:                1529db4c-73c6-43f2-85f3-2c36a62fa35a
	  Boot ID:                    d4be94e9-06e9-4b6d-a4ef-533587072111
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-sqb52                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m20s
	  kube-system                 etcd-pause-046839                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         5m33s
	  kube-system                 kube-apiserver-pause-046839             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-controller-manager-pause-046839    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-proxy-66j2k                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-pause-046839             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 5m17s              kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeAllocatableEnforced  5m33s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m33s              kubelet          Node pause-046839 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s              kubelet          Node pause-046839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s              kubelet          Node pause-046839 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m33s              kubelet          Starting kubelet.
	  Normal  NodeReady                5m32s              kubelet          Node pause-046839 status is now: NodeReady
	  Normal  RegisteredNode           5m20s              node-controller  Node pause-046839 event: Registered Node pause-046839 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-046839 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-046839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-046839 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-046839 event: Registered Node pause-046839 in Controller
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069487] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.819518] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.701081] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.176357] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.158522] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.459772] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.120782] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.165380] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.131142] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.251629] systemd-fstab-generator[706]: Ignoring "noauto" for root device
	[Jan 8 21:17] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[  +9.797667] systemd-fstab-generator[1272]: Ignoring "noauto" for root device
	[Jan 8 21:21] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.575826] systemd-fstab-generator[2242]: Ignoring "noauto" for root device
	[  +0.283362] systemd-fstab-generator[2263]: Ignoring "noauto" for root device
	[Jan 8 21:22] systemd-fstab-generator[2306]: Ignoring "noauto" for root device
	[  +0.304706] systemd-fstab-generator[2334]: Ignoring "noauto" for root device
	[  +0.336576] systemd-fstab-generator[2357]: Ignoring "noauto" for root device
	[ +19.371588] systemd-fstab-generator[3346]: Ignoring "noauto" for root device
	[  +8.126258] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [97475594be55bfb2856752c1907c0fbce57563cff435b2a719090fb47f9610e9] <==
	{"level":"info","ts":"2024-01-08T21:22:05.867455Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.74:2380"}
	{"level":"info","ts":"2024-01-08T21:22:06.933304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-08T21:22:06.933414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-08T21:22:06.933475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 received MsgPreVoteResp from 95b23b111ac7b7c0 at term 2"}
	{"level":"info","ts":"2024-01-08T21:22:06.933514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 became candidate at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:06.933542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 received MsgVoteResp from 95b23b111ac7b7c0 at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:06.933571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 became leader at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:06.933604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 95b23b111ac7b7c0 elected leader 95b23b111ac7b7c0 at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:06.936481Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"95b23b111ac7b7c0","local-member-attributes":"{Name:pause-046839 ClientURLs:[https://192.168.72.74:2379]}","request-path":"/0/members/95b23b111ac7b7c0/attributes","cluster-id":"62ce46a6f5a5249c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:22:06.936729Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:22:06.939289Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T21:22:06.939436Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:22:06.940649Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.74:2379"}
	{"level":"info","ts":"2024-01-08T21:22:06.946155Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:22:06.94634Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T21:22:19.54815Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-01-08T21:22:19.548378Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-046839","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.74:2380"],"advertise-client-urls":["https://192.168.72.74:2379"]}
	{"level":"warn","ts":"2024-01-08T21:22:19.548646Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-08T21:22:19.548717Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-08T21:22:19.550629Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.74:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-08T21:22:19.550683Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.74:2379: use of closed network connection"}
	{"level":"info","ts":"2024-01-08T21:22:19.55074Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"95b23b111ac7b7c0","current-leader-member-id":"95b23b111ac7b7c0"}
	{"level":"info","ts":"2024-01-08T21:22:19.555564Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.72.74:2380"}
	{"level":"info","ts":"2024-01-08T21:22:19.555738Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.72.74:2380"}
	{"level":"info","ts":"2024-01-08T21:22:19.555771Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-046839","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.74:2380"],"advertise-client-urls":["https://192.168.72.74:2379"]}
	
	
	==> etcd [f3e2670121e812a0e82ffcac2687cf4297c4ec4bce70a4793356c67050f8fd78] <==
	{"level":"info","ts":"2024-01-08T21:22:24.166643Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T21:22:24.166676Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T21:22:24.166527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 switched to configuration voters=(10786749002155538368)"}
	{"level":"info","ts":"2024-01-08T21:22:24.167746Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"62ce46a6f5a5249c","local-member-id":"95b23b111ac7b7c0","added-peer-id":"95b23b111ac7b7c0","added-peer-peer-urls":["https://192.168.72.74:2380"]}
	{"level":"info","ts":"2024-01-08T21:22:24.17309Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T21:22:24.174743Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"95b23b111ac7b7c0","initial-advertise-peer-urls":["https://192.168.72.74:2380"],"listen-peer-urls":["https://192.168.72.74:2380"],"advertise-client-urls":["https://192.168.72.74:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.74:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T21:22:24.174879Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T21:22:24.173352Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"62ce46a6f5a5249c","local-member-id":"95b23b111ac7b7c0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:22:24.175143Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:22:24.173594Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.74:2380"}
	{"level":"info","ts":"2024-01-08T21:22:24.183272Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.74:2380"}
	{"level":"info","ts":"2024-01-08T21:22:25.809724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 is starting a new election at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:25.809799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:25.809834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 received MsgPreVoteResp from 95b23b111ac7b7c0 at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:25.809849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 became candidate at term 4"}
	{"level":"info","ts":"2024-01-08T21:22:25.809854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 received MsgVoteResp from 95b23b111ac7b7c0 at term 4"}
	{"level":"info","ts":"2024-01-08T21:22:25.809862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 became leader at term 4"}
	{"level":"info","ts":"2024-01-08T21:22:25.809886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 95b23b111ac7b7c0 elected leader 95b23b111ac7b7c0 at term 4"}
	{"level":"info","ts":"2024-01-08T21:22:25.815657Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"95b23b111ac7b7c0","local-member-attributes":"{Name:pause-046839 ClientURLs:[https://192.168.72.74:2379]}","request-path":"/0/members/95b23b111ac7b7c0/attributes","cluster-id":"62ce46a6f5a5249c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:22:25.815676Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:22:25.8159Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:22:25.815943Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T21:22:25.815695Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:22:25.816994Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T21:22:25.817274Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.74:2379"}
	
	
	==> kernel <==
	 21:22:47 up 6 min,  0 users,  load average: 1.33, 0.70, 0.32
	Linux pause-046839 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [1cf9f8a0511eee71f9c68209e63bb555b6338af22e6c873c3332b06d7abef9ce] <==
	
	
	==> kube-apiserver [28835e43ff4219538f6d7e8713bfa65f4ae1b9e736725524f06819d0e2b588a4] <==
	I0108 21:22:27.274689       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0108 21:22:27.274704       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0108 21:22:27.274773       1 controller.go:116] Starting legacy_token_tracking_controller
	I0108 21:22:27.274804       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0108 21:22:27.295408       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:22:27.352819       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0108 21:22:27.374054       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0108 21:22:27.374163       1 aggregator.go:166] initial CRD sync complete...
	I0108 21:22:27.374289       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 21:22:27.374320       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 21:22:27.374345       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:22:27.375708       1 shared_informer.go:318] Caches are synced for configmaps
	I0108 21:22:27.387316       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0108 21:22:27.387395       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0108 21:22:27.392095       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:22:27.392332       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:22:27.395272       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0108 21:22:28.191435       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:22:29.210303       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 21:22:29.221775       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 21:22:29.281864       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 21:22:29.329320       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:22:29.339626       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:22:40.567770       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:22:40.616804       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2b964404f73a9b54fbea9bd8faae45e3cfb06251333f585bfd94a53d7c1c3c44] <==
	I0108 21:22:40.413569       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0108 21:22:40.414091       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0108 21:22:40.414367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="231.198µs"
	I0108 21:22:40.414701       1 shared_informer.go:318] Caches are synced for endpoint
	I0108 21:22:40.416268       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0108 21:22:40.416313       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0108 21:22:40.419040       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0108 21:22:40.419090       1 shared_informer.go:318] Caches are synced for namespace
	I0108 21:22:40.419606       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0108 21:22:40.431621       1 shared_informer.go:318] Caches are synced for ephemeral
	I0108 21:22:40.445281       1 shared_informer.go:318] Caches are synced for taint
	I0108 21:22:40.446006       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0108 21:22:40.446077       1 taint_manager.go:210] "Sending events to api server"
	I0108 21:22:40.447688       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0108 21:22:40.448499       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-046839"
	I0108 21:22:40.448690       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0108 21:22:40.448423       1 event.go:307] "Event occurred" object="pause-046839" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-046839 event: Registered Node pause-046839 in Controller"
	I0108 21:22:40.452068       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0108 21:22:40.521282       1 shared_informer.go:318] Caches are synced for resource quota
	I0108 21:22:40.522568       1 shared_informer.go:318] Caches are synced for resource quota
	I0108 21:22:40.538026       1 shared_informer.go:318] Caches are synced for disruption
	I0108 21:22:40.547785       1 shared_informer.go:318] Caches are synced for stateful set
	I0108 21:22:40.948645       1 shared_informer.go:318] Caches are synced for garbage collector
	I0108 21:22:40.998977       1 shared_informer.go:318] Caches are synced for garbage collector
	I0108 21:22:40.999032       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [e72a200434a99c4c582c1bcf7674f552c1489ca3bad5cc11d51749cd756d7e50] <==
	
	
	==> kube-proxy [289e84ca8abe46c26b95fe6ce12c1fa257cb0070a199e545506a338a86a4e30d] <==
	I0108 21:22:06.376508       1 server_others.go:69] "Using iptables proxy"
	E0108 21:22:06.382639       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-046839": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:07.542069       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-046839": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:09.733352       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-046839": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:14.100807       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-046839": dial tcp 192.168.72.74:8443: connect: connection refused
	
	
	==> kube-proxy [2b1724a578869bd4afc39040fb96805aff8c438683be6ce75fcd1f04eb92d178] <==
	I0108 21:22:28.587625       1 server_others.go:69] "Using iptables proxy"
	I0108 21:22:28.617930       1 node.go:141] Successfully retrieved node IP: 192.168.72.74
	I0108 21:22:28.681181       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:22:28.681313       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:22:28.683708       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:22:28.683813       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:22:28.683992       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:22:28.684030       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:22:28.685631       1 config.go:188] "Starting service config controller"
	I0108 21:22:28.685678       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:22:28.685697       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:22:28.685701       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:22:28.686030       1 config.go:315] "Starting node config controller"
	I0108 21:22:28.686034       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:22:28.786465       1 shared_informer.go:318] Caches are synced for node config
	I0108 21:22:28.786518       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 21:22:28.786484       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [854f808e04cd67296ae3625cc90f6564702fa4adecef4999912320c5ca5c9533] <==
	I0108 21:22:23.432545       1 serving.go:348] Generated self-signed cert in-memory
	W0108 21:22:27.200258       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 21:22:27.200412       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:22:27.200427       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 21:22:27.200433       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 21:22:27.305329       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0108 21:22:27.305414       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:22:27.308867       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 21:22:27.308992       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:22:27.309616       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0108 21:22:27.309855       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 21:22:27.410126       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9ca96751246c22a1a430c5a792b4130efbeb05e4f62eb3785bb0c3cea59c265d] <==
	E0108 21:22:14.884533       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.72.74:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:15.300607       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.72.74:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:15.300686       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.72.74:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:15.583537       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.72.74:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:15.583681       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.72.74:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:15.877872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.72.74:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:15.878095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.72.74:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:16.195767       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.72.74:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:16.195860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.72.74:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:16.598424       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.72.74:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:16.598468       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.72.74:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:16.728374       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.74:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:16.728471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.74:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:16.792352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.72.74:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:16.792530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.72.74:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:16.926750       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.72.74:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:16.926919       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.72.74:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:17.108378       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.72.74:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:17.108459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.72.74:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:17.443779       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.72.74:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:17.443924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.72.74:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:19.383849       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0108 21:22:19.384353       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0108 21:22:19.384490       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0108 21:22:19.384633       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:16:38 UTC, ends at Mon 2024-01-08 21:22:47 UTC. --
	Jan 08 21:22:21 pause-046839 kubelet[3352]: E0108 21:22:21.707017    3352 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.74:8443: connect: connection refused" node="pause-046839"
	Jan 08 21:22:21 pause-046839 kubelet[3352]: W0108 21:22:21.966600    3352 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-046839&limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:21 pause-046839 kubelet[3352]: E0108 21:22:21.966698    3352 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-046839&limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: W0108 21:22:22.077912    3352 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: E0108 21:22:22.078034    3352 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: W0108 21:22:22.316669    3352 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: E0108 21:22:22.316768    3352 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: W0108 21:22:22.341970    3352 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: E0108 21:22:22.342059    3352 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: E0108 21:22:22.401081    3352 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-046839?timeout=10s\": dial tcp 192.168.72.74:8443: connect: connection refused" interval="1.6s"
	Jan 08 21:22:22 pause-046839 kubelet[3352]: I0108 21:22:22.508902    3352 kubelet_node_status.go:70] "Attempting to register node" node="pause-046839"
	Jan 08 21:22:22 pause-046839 kubelet[3352]: E0108 21:22:22.509469    3352 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.74:8443: connect: connection refused" node="pause-046839"
	Jan 08 21:22:24 pause-046839 kubelet[3352]: I0108 21:22:24.110765    3352 kubelet_node_status.go:70] "Attempting to register node" node="pause-046839"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.347702    3352 kubelet_node_status.go:108] "Node was previously registered" node="pause-046839"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.347832    3352 kubelet_node_status.go:73] "Successfully registered node" node="pause-046839"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.349653    3352 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.350638    3352 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.980117    3352 apiserver.go:52] "Watching apiserver"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.983895    3352 topology_manager.go:215] "Topology Admit Handler" podUID="9af4e26a-25dc-4ac5-b6e3-d2532a643391" podNamespace="kube-system" podName="coredns-5dd5756b68-sqb52"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.984074    3352 topology_manager.go:215] "Topology Admit Handler" podUID="e7615d32-a6f2-461d-b804-930d11feddf3" podNamespace="kube-system" podName="kube-proxy-66j2k"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.996597    3352 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Jan 08 21:22:28 pause-046839 kubelet[3352]: I0108 21:22:28.094001    3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7615d32-a6f2-461d-b804-930d11feddf3-lib-modules\") pod \"kube-proxy-66j2k\" (UID: \"e7615d32-a6f2-461d-b804-930d11feddf3\") " pod="kube-system/kube-proxy-66j2k"
	Jan 08 21:22:28 pause-046839 kubelet[3352]: I0108 21:22:28.094046    3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7615d32-a6f2-461d-b804-930d11feddf3-xtables-lock\") pod \"kube-proxy-66j2k\" (UID: \"e7615d32-a6f2-461d-b804-930d11feddf3\") " pod="kube-system/kube-proxy-66j2k"
	Jan 08 21:22:28 pause-046839 kubelet[3352]: I0108 21:22:28.284654    3352 scope.go:117] "RemoveContainer" containerID="289e84ca8abe46c26b95fe6ce12c1fa257cb0070a199e545506a338a86a4e30d"
	Jan 08 21:22:28 pause-046839 kubelet[3352]: I0108 21:22:28.286811    3352 scope.go:117] "RemoveContainer" containerID="7984e52ff6beab5155133649d1aac1cdfcf1fc78c0d3a3d143109be96fcfe8eb"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-046839 -n pause-046839
helpers_test.go:261: (dbg) Run:  kubectl --context pause-046839 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-046839 -n pause-046839
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-046839 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-046839 logs -n 25: (1.379226303s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-879273                              | old-k8s-version-879273    | jenkins | v1.32.0 | 08 Jan 24 21:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-626488 sudo                            | NoKubernetes-626488       | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC |                     |
	|         | systemctl is-active --quiet                            |                           |         |         |                     |                     |
	|         | service kubelet                                        |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-626488                                 | NoKubernetes-626488       | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	| start   | -p force-systemd-flag-162170                           | force-systemd-flag-162170 | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|         | --memory=2048 --force-systemd                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-631345                              | running-upgrade-631345    | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	| delete  | -p force-systemd-env-467534                            | force-systemd-env-467534  | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	| start   | -p cert-expiration-001550                              | cert-expiration-001550    | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:16 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p cert-options-686681                                 | cert-options-686681       | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:16 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-162170 ssh cat                      | force-systemd-flag-162170 | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-162170                           | force-systemd-flag-162170 | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:15 UTC |
	| start   | -p pause-046839 --memory=2048                          | pause-046839              | jenkins | v1.32.0 | 08 Jan 24 21:15 UTC | 08 Jan 24 21:17 UTC |
	|         | --install-addons=false                                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                               |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| ssh     | cert-options-686681 ssh                                | cert-options-686681       | jenkins | v1.32.0 | 08 Jan 24 21:16 UTC | 08 Jan 24 21:16 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-686681 -- sudo                         | cert-options-686681       | jenkins | v1.32.0 | 08 Jan 24 21:16 UTC | 08 Jan 24 21:16 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-686681                                 | cert-options-686681       | jenkins | v1.32.0 | 08 Jan 24 21:16 UTC | 08 Jan 24 21:16 UTC |
	| start   | -p no-preload-420119                                   | no-preload-420119         | jenkins | v1.32.0 | 08 Jan 24 21:16 UTC | 08 Jan 24 21:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-879273             | old-k8s-version-879273    | jenkins | v1.32.0 | 08 Jan 24 21:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-879273                              | old-k8s-version-879273    | jenkins | v1.32.0 | 08 Jan 24 21:17 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                           |         |         |                     |                     |
	| start   | -p pause-046839                                        | pause-046839              | jenkins | v1.32.0 | 08 Jan 24 21:17 UTC | 08 Jan 24 21:22 UTC |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-420119             | no-preload-420119         | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC | 08 Jan 24 21:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-420119                                   | no-preload-420119         | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| start   | -p cert-expiration-001550                              | cert-expiration-001550    | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC | 08 Jan 24 21:22 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-420119                  | no-preload-420119         | jenkins | v1.32.0 | 08 Jan 24 21:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-420119                                   | no-preload-420119         | jenkins | v1.32.0 | 08 Jan 24 21:21 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-001550                              | cert-expiration-001550    | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	| start   | -p embed-certs-930023                                  | embed-certs-930023        | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:22:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:22:24.816132   49818 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:22:24.816270   49818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:22:24.816279   49818 out.go:309] Setting ErrFile to fd 2...
	I0108 21:22:24.816283   49818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:22:24.816479   49818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:22:24.817050   49818 out.go:303] Setting JSON to false
	I0108 21:22:24.817915   49818 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7469,"bootTime":1704741476,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:22:24.817973   49818 start.go:138] virtualization: kvm guest
	I0108 21:22:24.820677   49818 out.go:177] * [embed-certs-930023] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:22:24.822580   49818 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:22:24.822589   49818 notify.go:220] Checking for updates...
	I0108 21:22:24.824272   49818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:22:24.826097   49818 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:22:24.827687   49818 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:22:24.829231   49818 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:22:24.830951   49818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:22:24.833138   49818 config.go:182] Loaded profile config "no-preload-420119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 21:22:24.833266   49818 config.go:182] Loaded profile config "old-k8s-version-879273": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 21:22:24.833394   49818 config.go:182] Loaded profile config "pause-046839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:22:24.833474   49818 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:22:24.870126   49818 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 21:22:24.871501   49818 start.go:298] selected driver: kvm2
	I0108 21:22:24.871517   49818 start.go:902] validating driver "kvm2" against <nil>
	I0108 21:22:24.871530   49818 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:22:24.872252   49818 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:22:24.872347   49818 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:22:24.886741   49818 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:22:24.886788   49818 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 21:22:24.886979   49818 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:22:24.887035   49818 cni.go:84] Creating CNI manager for ""
	I0108 21:22:24.887047   49818 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:22:24.887059   49818 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 21:22:24.887067   49818 start_flags.go:323] config:
	{Name:embed-certs-930023 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-930023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:22:24.887203   49818 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:22:24.889338   49818 out.go:177] * Starting control plane node embed-certs-930023 in cluster embed-certs-930023
	I0108 21:22:23.552495   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:27.230201   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:22:27.230229   48106 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:22:27.230245   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:27.284597   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:22:27.284625   48106 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:22:27.552043   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:27.557416   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:22:27.557448   48106 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:22:28.052004   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:28.057102   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:22:28.057132   48106 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:22:28.552921   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:28.568357   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:22:28.568392   48106 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:22:29.051931   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:29.057212   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 200:
	ok
	I0108 21:22:29.066101   48106 api_server.go:141] control plane version: v1.28.4
	I0108 21:22:29.066140   48106 api_server.go:131] duration metric: took 6.014267249s to wait for apiserver health ...
	I0108 21:22:29.066150   48106 cni.go:84] Creating CNI manager for ""
	I0108 21:22:29.066159   48106 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:22:29.068296   48106 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 21:22:24.890813   49818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:22:24.890851   49818 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:22:24.890859   49818 cache.go:56] Caching tarball of preloaded images
	I0108 21:22:24.890935   49818 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:22:24.890945   49818 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:22:24.891032   49818 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023/config.json ...
	I0108 21:22:24.891048   49818 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023/config.json: {Name:mkdc54aa447c8da5b5aed4fc0de1cc18d12155c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:22:24.891170   49818 start.go:365] acquiring machines lock for embed-certs-930023: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:22:27.260329   49554 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.226:22: connect: no route to host
	I0108 21:22:30.332383   49554 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.226:22: connect: no route to host
	I0108 21:22:29.069844   48106 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 21:22:29.079272   48106 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 21:22:29.097596   48106 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:22:29.110551   48106 system_pods.go:59] 6 kube-system pods found
	I0108 21:22:29.110591   48106 system_pods.go:61] "coredns-5dd5756b68-sqb52" [9af4e26a-25dc-4ac5-b6e3-d2532a643391] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 21:22:29.110607   48106 system_pods.go:61] "etcd-pause-046839" [d2e4d0a0-9053-424f-9758-dda322538df8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:22:29.110618   48106 system_pods.go:61] "kube-apiserver-pause-046839" [6ee06cd7-be94-49f7-9b93-83c2d1fe9629] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 21:22:29.110637   48106 system_pods.go:61] "kube-controller-manager-pause-046839" [b09c7542-31c5-4e44-91a9-5a1989ceb3b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:22:29.110647   48106 system_pods.go:61] "kube-proxy-66j2k" [e7615d32-a6f2-461d-b804-930d11feddf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:22:29.110659   48106 system_pods.go:61] "kube-scheduler-pause-046839" [4e0540b6-7e0b-49c4-b7be-a7ba6269293d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:22:29.110668   48106 system_pods.go:74] duration metric: took 13.051063ms to wait for pod list to return data ...
	I0108 21:22:29.110679   48106 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:22:29.114581   48106 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:22:29.114604   48106 node_conditions.go:123] node cpu capacity is 2
	I0108 21:22:29.114614   48106 node_conditions.go:105] duration metric: took 3.93116ms to run NodePressure ...
	I0108 21:22:29.114633   48106 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:22:29.351408   48106 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 21:22:29.357006   48106 kubeadm.go:787] kubelet initialised
	I0108 21:22:29.357027   48106 kubeadm.go:788] duration metric: took 5.59273ms waiting for restarted kubelet to initialise ...
	I0108 21:22:29.357034   48106 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:22:29.361886   48106 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sqb52" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:29.369614   48106 pod_ready.go:92] pod "coredns-5dd5756b68-sqb52" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:29.369637   48106 pod_ready.go:81] duration metric: took 7.721589ms waiting for pod "coredns-5dd5756b68-sqb52" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:29.369648   48106 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:31.375804   48106 pod_ready.go:102] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:36.412354   49554 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.226:22: connect: no route to host
	I0108 21:22:33.377871   48106 pod_ready.go:102] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:35.876962   48106 pod_ready.go:102] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:39.484337   49554 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.226:22: connect: no route to host
	I0108 21:22:38.377766   48106 pod_ready.go:102] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:40.876649   48106 pod_ready.go:102] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"False"
	I0108 21:22:42.376910   48106 pod_ready.go:92] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:42.376934   48106 pod_ready.go:81] duration metric: took 13.007276303s waiting for pod "etcd-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.376944   48106 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.383206   48106 pod_ready.go:92] pod "kube-apiserver-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:42.383225   48106 pod_ready.go:81] duration metric: took 6.274766ms waiting for pod "kube-apiserver-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.383233   48106 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.388310   48106 pod_ready.go:92] pod "kube-controller-manager-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:42.388329   48106 pod_ready.go:81] duration metric: took 5.090216ms waiting for pod "kube-controller-manager-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.388337   48106 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-66j2k" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.394273   48106 pod_ready.go:92] pod "kube-proxy-66j2k" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:42.394294   48106 pod_ready.go:81] duration metric: took 5.949412ms waiting for pod "kube-proxy-66j2k" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.394304   48106 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.399516   48106 pod_ready.go:92] pod "kube-scheduler-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:42.399538   48106 pod_ready.go:81] duration metric: took 5.227845ms waiting for pod "kube-scheduler-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:42.399546   48106 pod_ready.go:38] duration metric: took 13.042504384s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:22:42.399566   48106 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:22:42.411839   48106 ops.go:34] apiserver oom_adj: -16
	I0108 21:22:42.411864   48106 kubeadm.go:640] restartCluster took 38.460488583s
	I0108 21:22:42.411873   48106 kubeadm.go:406] StartCluster complete in 38.882266124s
	I0108 21:22:42.411892   48106 settings.go:142] acquiring lock: {Name:mk91d3baf51872e4bb0758b94fca7c7249bb9666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:22:42.411980   48106 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:22:42.413263   48106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/kubeconfig: {Name:mkeb2e8a20e31c0c2d5c7e8214a27af3141300ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:22:42.413531   48106 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:22:42.413619   48106 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:22:42.413750   48106 config.go:182] Loaded profile config "pause-046839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:22:42.415833   48106 out.go:177] * Enabled addons: 
	I0108 21:22:42.414535   48106 kapi.go:59] client config for pause-046839: &rest.Config{Host:"https://192.168.72.74:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/pause-046839/client.crt", KeyFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/profiles/pause-046839/client.key", CAFile:"/home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]str
ing(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:22:42.417417   48106 addons.go:508] enable addons completed in 3.802684ms: enabled=[]
	I0108 21:22:42.420602   48106 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-046839" context rescaled to 1 replicas
	I0108 21:22:42.420632   48106 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:22:42.422488   48106 out.go:177] * Verifying Kubernetes components...
	I0108 21:22:42.423977   48106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:22:42.524732   48106 node_ready.go:35] waiting up to 6m0s for node "pause-046839" to be "Ready" ...
	I0108 21:22:42.524754   48106 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 21:22:42.575242   48106 node_ready.go:49] node "pause-046839" has status "Ready":"True"
	I0108 21:22:42.575265   48106 node_ready.go:38] duration metric: took 50.504316ms waiting for node "pause-046839" to be "Ready" ...
	I0108 21:22:42.575277   48106 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:22:42.776065   48106 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sqb52" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:43.173324   48106 pod_ready.go:92] pod "coredns-5dd5756b68-sqb52" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:43.173346   48106 pod_ready.go:81] duration metric: took 397.256083ms waiting for pod "coredns-5dd5756b68-sqb52" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:43.173357   48106 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:43.575700   48106 pod_ready.go:92] pod "etcd-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:43.575726   48106 pod_ready.go:81] duration metric: took 402.362326ms waiting for pod "etcd-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:43.575738   48106 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:43.974565   48106 pod_ready.go:92] pod "kube-apiserver-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:43.974587   48106 pod_ready.go:81] duration metric: took 398.842314ms waiting for pod "kube-apiserver-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:43.974598   48106 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:44.374446   48106 pod_ready.go:92] pod "kube-controller-manager-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:44.374468   48106 pod_ready.go:81] duration metric: took 399.863823ms waiting for pod "kube-controller-manager-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:44.374477   48106 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-66j2k" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:44.774007   48106 pod_ready.go:92] pod "kube-proxy-66j2k" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:44.774030   48106 pod_ready.go:81] duration metric: took 399.546593ms waiting for pod "kube-proxy-66j2k" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:44.774038   48106 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:45.174913   48106 pod_ready.go:92] pod "kube-scheduler-pause-046839" in "kube-system" namespace has status "Ready":"True"
	I0108 21:22:45.174936   48106 pod_ready.go:81] duration metric: took 400.891464ms waiting for pod "kube-scheduler-pause-046839" in "kube-system" namespace to be "Ready" ...
	I0108 21:22:45.174943   48106 pod_ready.go:38] duration metric: took 2.599658286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:22:45.174956   48106 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:22:45.175001   48106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:22:45.187531   48106 api_server.go:72] duration metric: took 2.766873053s to wait for apiserver process to appear ...
	I0108 21:22:45.187561   48106 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:22:45.187581   48106 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0108 21:22:45.192625   48106 api_server.go:279] https://192.168.72.74:8443/healthz returned 200:
	ok
	I0108 21:22:45.193919   48106 api_server.go:141] control plane version: v1.28.4
	I0108 21:22:45.193940   48106 api_server.go:131] duration metric: took 6.374571ms to wait for apiserver health ...
	I0108 21:22:45.193948   48106 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:22:45.376592   48106 system_pods.go:59] 6 kube-system pods found
	I0108 21:22:45.376617   48106 system_pods.go:61] "coredns-5dd5756b68-sqb52" [9af4e26a-25dc-4ac5-b6e3-d2532a643391] Running
	I0108 21:22:45.376622   48106 system_pods.go:61] "etcd-pause-046839" [d2e4d0a0-9053-424f-9758-dda322538df8] Running
	I0108 21:22:45.376626   48106 system_pods.go:61] "kube-apiserver-pause-046839" [6ee06cd7-be94-49f7-9b93-83c2d1fe9629] Running
	I0108 21:22:45.376631   48106 system_pods.go:61] "kube-controller-manager-pause-046839" [b09c7542-31c5-4e44-91a9-5a1989ceb3b7] Running
	I0108 21:22:45.376634   48106 system_pods.go:61] "kube-proxy-66j2k" [e7615d32-a6f2-461d-b804-930d11feddf3] Running
	I0108 21:22:45.376638   48106 system_pods.go:61] "kube-scheduler-pause-046839" [4e0540b6-7e0b-49c4-b7be-a7ba6269293d] Running
	I0108 21:22:45.376643   48106 system_pods.go:74] duration metric: took 182.690214ms to wait for pod list to return data ...
	I0108 21:22:45.376650   48106 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:22:45.574664   48106 default_sa.go:45] found service account: "default"
	I0108 21:22:45.574687   48106 default_sa.go:55] duration metric: took 198.031923ms for default service account to be created ...
	I0108 21:22:45.574695   48106 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:22:45.776845   48106 system_pods.go:86] 6 kube-system pods found
	I0108 21:22:45.776870   48106 system_pods.go:89] "coredns-5dd5756b68-sqb52" [9af4e26a-25dc-4ac5-b6e3-d2532a643391] Running
	I0108 21:22:45.776880   48106 system_pods.go:89] "etcd-pause-046839" [d2e4d0a0-9053-424f-9758-dda322538df8] Running
	I0108 21:22:45.776885   48106 system_pods.go:89] "kube-apiserver-pause-046839" [6ee06cd7-be94-49f7-9b93-83c2d1fe9629] Running
	I0108 21:22:45.776889   48106 system_pods.go:89] "kube-controller-manager-pause-046839" [b09c7542-31c5-4e44-91a9-5a1989ceb3b7] Running
	I0108 21:22:45.776893   48106 system_pods.go:89] "kube-proxy-66j2k" [e7615d32-a6f2-461d-b804-930d11feddf3] Running
	I0108 21:22:45.776897   48106 system_pods.go:89] "kube-scheduler-pause-046839" [4e0540b6-7e0b-49c4-b7be-a7ba6269293d] Running
	I0108 21:22:45.776903   48106 system_pods.go:126] duration metric: took 202.2028ms to wait for k8s-apps to be running ...
	I0108 21:22:45.776909   48106 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:22:45.776952   48106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:22:45.790067   48106 system_svc.go:56] duration metric: took 13.144691ms WaitForService to wait for kubelet.
	I0108 21:22:45.790096   48106 kubeadm.go:581] duration metric: took 3.369446487s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:22:45.790113   48106 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:22:45.974030   48106 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:22:45.974058   48106 node_conditions.go:123] node cpu capacity is 2
	I0108 21:22:45.974068   48106 node_conditions.go:105] duration metric: took 183.950628ms to run NodePressure ...
	I0108 21:22:45.974078   48106 start.go:228] waiting for startup goroutines ...
	I0108 21:22:45.974084   48106 start.go:233] waiting for cluster config update ...
	I0108 21:22:45.974090   48106 start.go:242] writing updated cluster config ...
	I0108 21:22:45.974349   48106 ssh_runner.go:195] Run: rm -f paused
	I0108 21:22:46.020658   48106 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 21:22:46.023280   48106 out.go:177] * Done! kubectl is now configured to use "pause-046839" cluster and "default" namespace by default
	I0108 21:22:45.564358   49554 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.226:22: connect: no route to host
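
	For context, the api_server.go lines above record minikube repeatedly probing https://192.168.72.74:8443/healthz and tolerating 500 responses (the [-]poststarthook/rbac/bootstrap-roles entries) until the endpoint returns 200. The following is only an illustrative sketch of that polling pattern, not minikube's actual implementation: the URL is taken from the log, the 500ms interval mirrors the log timestamps, and skipping TLS verification is an assumption made for brevity (minikube itself authenticates with the profile's client certificates, per the kapi.go client config above).

	// healthzwait: illustrative sketch of polling a kube-apiserver /healthz
	// endpoint until it reports 200 OK or a timeout expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Assumption: certificate verification is skipped here for brevity.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: control plane is ready
				}
				// A 500 while poststarthooks (e.g. rbac/bootstrap-roles) finish
				// is expected during restart, so keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s to become healthy", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.74:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
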
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 21:16:38 UTC, ends at Mon 2024-01-08 21:22:48 UTC. --
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.826504292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748968826489386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=0c082688-3809-432a-93a6-7b25f16c51d4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.827013835Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4e195bbd-098c-4c69-99f1-914f5ec303b9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.827065124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4e195bbd-098c-4c69-99f1-914f5ec303b9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.827475754Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c48fbf35495079c85076c2aa65479dbe2cb5ac1e1d58ee9acc34f2055678c0e,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704748948340818606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1724a578869bd4afc39040fb96805aff8c438683be6ce75fcd1f04eb92d178,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704748948314127462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash: f994b237,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854f808e04cd67296ae3625cc90f6564702fa4adecef4999912320c5ca5c9533,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704748941752570047,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e
63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b964404f73a9b54fbea9bd8faae45e3cfb06251333f585bfd94a53d7c1c3c44,PodSandboxId:28def94e5e6c5a6fdbf7d52d844319df917492caced6143cecec795f58f1d9f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704748941726024030,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e2670121e812a0e82ffcac2687cf4297c4ec4bce70a4793356c67050f8fd78,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704748941671398790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28835e43ff4219538f6d7e8713bfa65f4ae1b9e736725524f06819d0e2b588a4,PodSandboxId:82513df00f6b76f768764e73433e7a520442919063467bed95463142f7229633,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704748941693482960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04ddc1b5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289e84ca8abe46c26b95fe6ce12c1fa257cb0070a199e545506a338a86a4e30d,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704748926148337354,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash:
f994b237,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7984e52ff6beab5155133649d1aac1cdfcf1fc78c0d3a3d143109be96fcfe8eb,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704748925243885893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca96751246c22a1a430c5a792b4130efbeb05e4f62eb3785bb0c3cea59c265d,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704748924717132999,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97475594be55bfb2856752c1907c0fbce57563cff435b2a719090fb47f9610e9,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704748924607999578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9f8a0511eee71f9c68209e63bb555b6338af22e6c873c3332b06d7abef9ce,PodSandboxId:4585fde47a7d201514d2f8a3cc15620f0da5ee70c78bc34ff409487ccdba2849,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704748920722532559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04d
dc1b5,},Annotations:map[string]string{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72a200434a99c4c582c1bcf7674f552c1489ca3bad5cc11d51749cd756d7e50,PodSandboxId:c550bf6838cea38566320b1acd60f4383347b736c54a52db11608f5d323d317e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1704748920345068243,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4e195bbd-098c-4c69-99f1-914f5ec303b9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.869843347Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=76a2dd6a-2ada-4e78-a789-06a20f9e76c3 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.869932940Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=76a2dd6a-2ada-4e78-a789-06a20f9e76c3 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.871516826Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=111c9f13-e0b7-4e8b-8912-ca786dd28d8f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.871845966Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748968871834004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=111c9f13-e0b7-4e8b-8912-ca786dd28d8f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.872767069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4a166d6c-9aef-418d-b45e-e135e704f892 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.872841587Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4a166d6c-9aef-418d-b45e-e135e704f892 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.873137070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c48fbf35495079c85076c2aa65479dbe2cb5ac1e1d58ee9acc34f2055678c0e,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704748948340818606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1724a578869bd4afc39040fb96805aff8c438683be6ce75fcd1f04eb92d178,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704748948314127462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash: f994b237,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854f808e04cd67296ae3625cc90f6564702fa4adecef4999912320c5ca5c9533,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704748941752570047,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e
63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b964404f73a9b54fbea9bd8faae45e3cfb06251333f585bfd94a53d7c1c3c44,PodSandboxId:28def94e5e6c5a6fdbf7d52d844319df917492caced6143cecec795f58f1d9f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704748941726024030,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e2670121e812a0e82ffcac2687cf4297c4ec4bce70a4793356c67050f8fd78,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704748941671398790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28835e43ff4219538f6d7e8713bfa65f4ae1b9e736725524f06819d0e2b588a4,PodSandboxId:82513df00f6b76f768764e73433e7a520442919063467bed95463142f7229633,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704748941693482960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04ddc1b5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289e84ca8abe46c26b95fe6ce12c1fa257cb0070a199e545506a338a86a4e30d,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704748926148337354,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash:
f994b237,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7984e52ff6beab5155133649d1aac1cdfcf1fc78c0d3a3d143109be96fcfe8eb,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704748925243885893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca96751246c22a1a430c5a792b4130efbeb05e4f62eb3785bb0c3cea59c265d,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704748924717132999,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97475594be55bfb2856752c1907c0fbce57563cff435b2a719090fb47f9610e9,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704748924607999578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9f8a0511eee71f9c68209e63bb555b6338af22e6c873c3332b06d7abef9ce,PodSandboxId:4585fde47a7d201514d2f8a3cc15620f0da5ee70c78bc34ff409487ccdba2849,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704748920722532559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04d
dc1b5,},Annotations:map[string]string{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72a200434a99c4c582c1bcf7674f552c1489ca3bad5cc11d51749cd756d7e50,PodSandboxId:c550bf6838cea38566320b1acd60f4383347b736c54a52db11608f5d323d317e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1704748920345068243,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4a166d6c-9aef-418d-b45e-e135e704f892 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.922639950Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=42c1427f-c963-4006-a142-3524bd8ea6a2 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.922732573Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=42c1427f-c963-4006-a142-3524bd8ea6a2 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.924270797Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=213a6a92-86cb-4586-877f-2c6cb73c4564 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.924677929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748968924662931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=213a6a92-86cb-4586-877f-2c6cb73c4564 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.925591804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b3d1f8e8-fab2-4f9c-b72a-713ad8a46a14 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.925678714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b3d1f8e8-fab2-4f9c-b72a-713ad8a46a14 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.925941017Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c48fbf35495079c85076c2aa65479dbe2cb5ac1e1d58ee9acc34f2055678c0e,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704748948340818606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1724a578869bd4afc39040fb96805aff8c438683be6ce75fcd1f04eb92d178,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704748948314127462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash: f994b237,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854f808e04cd67296ae3625cc90f6564702fa4adecef4999912320c5ca5c9533,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704748941752570047,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e
63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b964404f73a9b54fbea9bd8faae45e3cfb06251333f585bfd94a53d7c1c3c44,PodSandboxId:28def94e5e6c5a6fdbf7d52d844319df917492caced6143cecec795f58f1d9f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704748941726024030,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e2670121e812a0e82ffcac2687cf4297c4ec4bce70a4793356c67050f8fd78,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704748941671398790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28835e43ff4219538f6d7e8713bfa65f4ae1b9e736725524f06819d0e2b588a4,PodSandboxId:82513df00f6b76f768764e73433e7a520442919063467bed95463142f7229633,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704748941693482960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04ddc1b5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289e84ca8abe46c26b95fe6ce12c1fa257cb0070a199e545506a338a86a4e30d,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704748926148337354,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash:
f994b237,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7984e52ff6beab5155133649d1aac1cdfcf1fc78c0d3a3d143109be96fcfe8eb,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704748925243885893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca96751246c22a1a430c5a792b4130efbeb05e4f62eb3785bb0c3cea59c265d,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704748924717132999,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97475594be55bfb2856752c1907c0fbce57563cff435b2a719090fb47f9610e9,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704748924607999578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9f8a0511eee71f9c68209e63bb555b6338af22e6c873c3332b06d7abef9ce,PodSandboxId:4585fde47a7d201514d2f8a3cc15620f0da5ee70c78bc34ff409487ccdba2849,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704748920722532559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04d
dc1b5,},Annotations:map[string]string{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72a200434a99c4c582c1bcf7674f552c1489ca3bad5cc11d51749cd756d7e50,PodSandboxId:c550bf6838cea38566320b1acd60f4383347b736c54a52db11608f5d323d317e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1704748920345068243,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b3d1f8e8-fab2-4f9c-b72a-713ad8a46a14 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.969771781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f865f749-d4fd-452d-b53f-64fd39cc478d name=/runtime.v1.RuntimeService/Version
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.969853224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f865f749-d4fd-452d-b53f-64fd39cc478d name=/runtime.v1.RuntimeService/Version
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.971279388Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=aa9362c5-eaac-4c08-ac65-ce7c5b12cbfd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.971608913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704748968971595521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=aa9362c5-eaac-4c08-ac65-ce7c5b12cbfd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.972120357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f3d844ca-0090-4ea3-9037-27b6201221a3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.972293500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f3d844ca-0090-4ea3-9037-27b6201221a3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:22:48 pause-046839 crio[2457]: time="2024-01-08 21:22:48.972596052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c48fbf35495079c85076c2aa65479dbe2cb5ac1e1d58ee9acc34f2055678c0e,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704748948340818606,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b1724a578869bd4afc39040fb96805aff8c438683be6ce75fcd1f04eb92d178,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704748948314127462,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash: f994b237,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854f808e04cd67296ae3625cc90f6564702fa4adecef4999912320c5ca5c9533,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704748941752570047,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e
63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b964404f73a9b54fbea9bd8faae45e3cfb06251333f585bfd94a53d7c1c3c44,PodSandboxId:28def94e5e6c5a6fdbf7d52d844319df917492caced6143cecec795f58f1d9f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704748941726024030,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3e2670121e812a0e82ffcac2687cf4297c4ec4bce70a4793356c67050f8fd78,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704748941671398790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28835e43ff4219538f6d7e8713bfa65f4ae1b9e736725524f06819d0e2b588a4,PodSandboxId:82513df00f6b76f768764e73433e7a520442919063467bed95463142f7229633,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704748941693482960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04ddc1b5,},Annotations:map[string]s
tring{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:289e84ca8abe46c26b95fe6ce12c1fa257cb0070a199e545506a338a86a4e30d,PodSandboxId:1792283600f96310044cc33cdda0c754f5d42f0c9adbc6ab46fd79f1946e9ad1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_EXITED,CreatedAt:1704748926148337354,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-66j2k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7615d32-a6f2-461d-b804-930d11feddf3,},Annotations:map[string]string{io.kubernetes.container.hash:
f994b237,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7984e52ff6beab5155133649d1aac1cdfcf1fc78c0d3a3d143109be96fcfe8eb,PodSandboxId:59d85f6aaf4a71375b81c136e6949ddb48db0a44693715dc774334bf20d103e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1704748925243885893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sqb52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9af4e26a-25dc-4ac5-b6e3-d2532a643391,},Annotations:map[string]string{io.kubernetes.container.hash: 6dfc181f,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca96751246c22a1a430c5a792b4130efbeb05e4f62eb3785bb0c3cea59c265d,PodSandboxId:8d2c73e139c4e1dca03cfba968df6f83d61e382b2a34c659c6abf854f4e1bf53,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_EXITED,CreatedAt:1704748924717132999,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sc
heduler-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a2da4e63600709fa82489d9b95374a4,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97475594be55bfb2856752c1907c0fbce57563cff435b2a719090fb47f9610e9,PodSandboxId:41efcad319c141f54a13d5288fcbdc6c3b39b5350f0ba5f31aa4365ee390f925,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1704748924607999578,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046839,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: a8865840bb4af40ab8821712383e85f0,},Annotations:map[string]string{io.kubernetes.container.hash: 4db6ba05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf9f8a0511eee71f9c68209e63bb555b6338af22e6c873c3332b06d7abef9ce,PodSandboxId:4585fde47a7d201514d2f8a3cc15620f0da5ee70c78bc34ff409487ccdba2849,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1704748920722532559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdfee710a88808c56f62745e04d
dc1b5,},Annotations:map[string]string{io.kubernetes.container.hash: ad41967e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72a200434a99c4c582c1bcf7674f552c1489ca3bad5cc11d51749cd756d7e50,PodSandboxId:c550bf6838cea38566320b1acd60f4383347b736c54a52db11608f5d323d317e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1704748920345068243,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4aa22fd3dc6bcd8b1e7ff7d549e3cda8,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f3d844ca-0090-4ea3-9037-27b6201221a3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1c48fbf354950       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   20 seconds ago      Running             coredns                   2                   59d85f6aaf4a7       coredns-5dd5756b68-sqb52
	2b1724a578869       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   20 seconds ago      Running             kube-proxy                2                   1792283600f96       kube-proxy-66j2k
	854f808e04cd6       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   27 seconds ago      Running             kube-scheduler            2                   8d2c73e139c4e       kube-scheduler-pause-046839
	2b964404f73a9       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   27 seconds ago      Running             kube-controller-manager   2                   28def94e5e6c5       kube-controller-manager-pause-046839
	28835e43ff421       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   27 seconds ago      Running             kube-apiserver            2                   82513df00f6b7       kube-apiserver-pause-046839
	f3e2670121e81       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   27 seconds ago      Running             etcd                      2                   41efcad319c14       etcd-pause-046839
	289e84ca8abe4       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   42 seconds ago      Exited              kube-proxy                1                   1792283600f96       kube-proxy-66j2k
	7984e52ff6bea       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   43 seconds ago      Exited              coredns                   1                   59d85f6aaf4a7       coredns-5dd5756b68-sqb52
	9ca96751246c2       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   44 seconds ago      Exited              kube-scheduler            1                   8d2c73e139c4e       kube-scheduler-pause-046839
	97475594be55b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   44 seconds ago      Exited              etcd                      1                   41efcad319c14       etcd-pause-046839
	1cf9f8a0511ee       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   48 seconds ago      Exited              kube-apiserver            1                   4585fde47a7d2       kube-apiserver-pause-046839
	e72a200434a99       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   48 seconds ago      Exited              kube-controller-manager   1                   c550bf6838cea       kube-controller-manager-pause-046839
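For reference, the table above follows the column layout of crictl; a minimal way to reproduce it inside the guest (an illustrative sketch, assuming crictl is available in the minikube VM image, and using the ssh form these tests already use) is:

    out/minikube-linux-amd64 -p pause-046839 ssh "sudo crictl ps -a"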
	
	
	==> coredns [1c48fbf35495079c85076c2aa65479dbe2cb5ac1e1d58ee9acc34f2055678c0e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50914 - 12076 "HINFO IN 7386635121953303377.3056170504652277233. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018503851s
	
	
	==> coredns [7984e52ff6beab5155133649d1aac1cdfcf1fc78c0d3a3d143109be96fcfe8eb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32838 - 1797 "HINFO IN 4888549285694502217.5742732887607354119. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016783024s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-046839
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-046839
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=pause-046839
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_17_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:17:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-046839
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:22:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:22:27 +0000   Mon, 08 Jan 2024 21:17:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:22:27 +0000   Mon, 08 Jan 2024 21:17:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:22:27 +0000   Mon, 08 Jan 2024 21:17:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:22:27 +0000   Mon, 08 Jan 2024 21:17:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.74
	  Hostname:    pause-046839
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 1529db4c73c643f285f32c36a62fa35a
	  System UUID:                1529db4c-73c6-43f2-85f3-2c36a62fa35a
	  Boot ID:                    d4be94e9-06e9-4b6d-a4ef-533587072111
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-sqb52                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m22s
	  kube-system                 etcd-pause-046839                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         5m35s
	  kube-system                 kube-apiserver-pause-046839             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-pause-046839    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-proxy-66j2k                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-scheduler-pause-046839             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 5m20s              kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeAllocatableEnforced  5m35s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m35s              kubelet          Node pause-046839 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m35s              kubelet          Node pause-046839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m35s              kubelet          Node pause-046839 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m35s              kubelet          Starting kubelet.
	  Normal  NodeReady                5m34s              kubelet          Node pause-046839 status is now: NodeReady
	  Normal  RegisteredNode           5m22s              node-controller  Node pause-046839 event: Registered Node pause-046839 in Controller
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)  kubelet          Node pause-046839 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)  kubelet          Node pause-046839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 28s)  kubelet          Node pause-046839 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-046839 event: Registered Node pause-046839 in Controller
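For reference, the node description above has the shape of "kubectl describe node" output; a minimal sketch of reproducing it (assuming the kubeconfig context carries the profile name, as minikube configures by default) is:

    kubectl --context pause-046839 describe node pause-046839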
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069487] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.819518] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.701081] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.176357] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.158522] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.459772] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.120782] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.165380] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.131142] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.251629] systemd-fstab-generator[706]: Ignoring "noauto" for root device
	[Jan 8 21:17] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[  +9.797667] systemd-fstab-generator[1272]: Ignoring "noauto" for root device
	[Jan 8 21:21] kauditd_printk_skb: 23 callbacks suppressed
	[  +0.575826] systemd-fstab-generator[2242]: Ignoring "noauto" for root device
	[  +0.283362] systemd-fstab-generator[2263]: Ignoring "noauto" for root device
	[Jan 8 21:22] systemd-fstab-generator[2306]: Ignoring "noauto" for root device
	[  +0.304706] systemd-fstab-generator[2334]: Ignoring "noauto" for root device
	[  +0.336576] systemd-fstab-generator[2357]: Ignoring "noauto" for root device
	[ +19.371588] systemd-fstab-generator[3346]: Ignoring "noauto" for root device
	[  +8.126258] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [97475594be55bfb2856752c1907c0fbce57563cff435b2a719090fb47f9610e9] <==
	{"level":"info","ts":"2024-01-08T21:22:05.867455Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.74:2380"}
	{"level":"info","ts":"2024-01-08T21:22:06.933304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-08T21:22:06.933414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-08T21:22:06.933475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 received MsgPreVoteResp from 95b23b111ac7b7c0 at term 2"}
	{"level":"info","ts":"2024-01-08T21:22:06.933514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 became candidate at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:06.933542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 received MsgVoteResp from 95b23b111ac7b7c0 at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:06.933571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 became leader at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:06.933604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 95b23b111ac7b7c0 elected leader 95b23b111ac7b7c0 at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:06.936481Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"95b23b111ac7b7c0","local-member-attributes":"{Name:pause-046839 ClientURLs:[https://192.168.72.74:2379]}","request-path":"/0/members/95b23b111ac7b7c0/attributes","cluster-id":"62ce46a6f5a5249c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:22:06.936729Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:22:06.939289Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T21:22:06.939436Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:22:06.940649Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.74:2379"}
	{"level":"info","ts":"2024-01-08T21:22:06.946155Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:22:06.94634Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T21:22:19.54815Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-01-08T21:22:19.548378Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-046839","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.74:2380"],"advertise-client-urls":["https://192.168.72.74:2379"]}
	{"level":"warn","ts":"2024-01-08T21:22:19.548646Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-08T21:22:19.548717Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-08T21:22:19.550629Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.74:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-01-08T21:22:19.550683Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.74:2379: use of closed network connection"}
	{"level":"info","ts":"2024-01-08T21:22:19.55074Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"95b23b111ac7b7c0","current-leader-member-id":"95b23b111ac7b7c0"}
	{"level":"info","ts":"2024-01-08T21:22:19.555564Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.72.74:2380"}
	{"level":"info","ts":"2024-01-08T21:22:19.555738Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.72.74:2380"}
	{"level":"info","ts":"2024-01-08T21:22:19.555771Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-046839","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.74:2380"],"advertise-client-urls":["https://192.168.72.74:2379"]}
	
	
	==> etcd [f3e2670121e812a0e82ffcac2687cf4297c4ec4bce70a4793356c67050f8fd78] <==
	{"level":"info","ts":"2024-01-08T21:22:24.166643Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T21:22:24.166676Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-08T21:22:24.166527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 switched to configuration voters=(10786749002155538368)"}
	{"level":"info","ts":"2024-01-08T21:22:24.167746Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"62ce46a6f5a5249c","local-member-id":"95b23b111ac7b7c0","added-peer-id":"95b23b111ac7b7c0","added-peer-peer-urls":["https://192.168.72.74:2380"]}
	{"level":"info","ts":"2024-01-08T21:22:24.17309Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-08T21:22:24.174743Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"95b23b111ac7b7c0","initial-advertise-peer-urls":["https://192.168.72.74:2380"],"listen-peer-urls":["https://192.168.72.74:2380"],"advertise-client-urls":["https://192.168.72.74:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.74:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-08T21:22:24.174879Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T21:22:24.173352Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"62ce46a6f5a5249c","local-member-id":"95b23b111ac7b7c0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:22:24.175143Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:22:24.173594Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.74:2380"}
	{"level":"info","ts":"2024-01-08T21:22:24.183272Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.74:2380"}
	{"level":"info","ts":"2024-01-08T21:22:25.809724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 is starting a new election at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:25.809799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:25.809834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 received MsgPreVoteResp from 95b23b111ac7b7c0 at term 3"}
	{"level":"info","ts":"2024-01-08T21:22:25.809849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 became candidate at term 4"}
	{"level":"info","ts":"2024-01-08T21:22:25.809854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 received MsgVoteResp from 95b23b111ac7b7c0 at term 4"}
	{"level":"info","ts":"2024-01-08T21:22:25.809862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95b23b111ac7b7c0 became leader at term 4"}
	{"level":"info","ts":"2024-01-08T21:22:25.809886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 95b23b111ac7b7c0 elected leader 95b23b111ac7b7c0 at term 4"}
	{"level":"info","ts":"2024-01-08T21:22:25.815657Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"95b23b111ac7b7c0","local-member-attributes":"{Name:pause-046839 ClientURLs:[https://192.168.72.74:2379]}","request-path":"/0/members/95b23b111ac7b7c0/attributes","cluster-id":"62ce46a6f5a5249c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:22:25.815676Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:22:25.8159Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:22:25.815943Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T21:22:25.815695Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:22:25.816994Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T21:22:25.817274Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.74:2379"}
	
	
	==> kernel <==
	 21:22:49 up 6 min,  0 users,  load average: 1.33, 0.70, 0.32
	Linux pause-046839 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [1cf9f8a0511eee71f9c68209e63bb555b6338af22e6c873c3332b06d7abef9ce] <==
	
	
	==> kube-apiserver [28835e43ff4219538f6d7e8713bfa65f4ae1b9e736725524f06819d0e2b588a4] <==
	I0108 21:22:27.274689       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0108 21:22:27.274704       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0108 21:22:27.274773       1 controller.go:116] Starting legacy_token_tracking_controller
	I0108 21:22:27.274804       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0108 21:22:27.295408       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:22:27.352819       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0108 21:22:27.374054       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0108 21:22:27.374163       1 aggregator.go:166] initial CRD sync complete...
	I0108 21:22:27.374289       1 autoregister_controller.go:141] Starting autoregister controller
	I0108 21:22:27.374320       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0108 21:22:27.374345       1 cache.go:39] Caches are synced for autoregister controller
	I0108 21:22:27.375708       1 shared_informer.go:318] Caches are synced for configmaps
	I0108 21:22:27.387316       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0108 21:22:27.387395       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0108 21:22:27.392095       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:22:27.392332       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:22:27.395272       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0108 21:22:28.191435       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:22:29.210303       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 21:22:29.221775       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 21:22:29.281864       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 21:22:29.329320       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:22:29.339626       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:22:40.567770       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:22:40.616804       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2b964404f73a9b54fbea9bd8faae45e3cfb06251333f585bfd94a53d7c1c3c44] <==
	I0108 21:22:40.413569       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0108 21:22:40.414091       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0108 21:22:40.414367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="231.198µs"
	I0108 21:22:40.414701       1 shared_informer.go:318] Caches are synced for endpoint
	I0108 21:22:40.416268       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0108 21:22:40.416313       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0108 21:22:40.419040       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0108 21:22:40.419090       1 shared_informer.go:318] Caches are synced for namespace
	I0108 21:22:40.419606       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0108 21:22:40.431621       1 shared_informer.go:318] Caches are synced for ephemeral
	I0108 21:22:40.445281       1 shared_informer.go:318] Caches are synced for taint
	I0108 21:22:40.446006       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0108 21:22:40.446077       1 taint_manager.go:210] "Sending events to api server"
	I0108 21:22:40.447688       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0108 21:22:40.448499       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-046839"
	I0108 21:22:40.448690       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0108 21:22:40.448423       1 event.go:307] "Event occurred" object="pause-046839" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-046839 event: Registered Node pause-046839 in Controller"
	I0108 21:22:40.452068       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0108 21:22:40.521282       1 shared_informer.go:318] Caches are synced for resource quota
	I0108 21:22:40.522568       1 shared_informer.go:318] Caches are synced for resource quota
	I0108 21:22:40.538026       1 shared_informer.go:318] Caches are synced for disruption
	I0108 21:22:40.547785       1 shared_informer.go:318] Caches are synced for stateful set
	I0108 21:22:40.948645       1 shared_informer.go:318] Caches are synced for garbage collector
	I0108 21:22:40.998977       1 shared_informer.go:318] Caches are synced for garbage collector
	I0108 21:22:40.999032       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [e72a200434a99c4c582c1bcf7674f552c1489ca3bad5cc11d51749cd756d7e50] <==
	
	
	==> kube-proxy [289e84ca8abe46c26b95fe6ce12c1fa257cb0070a199e545506a338a86a4e30d] <==
	I0108 21:22:06.376508       1 server_others.go:69] "Using iptables proxy"
	E0108 21:22:06.382639       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-046839": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:07.542069       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-046839": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:09.733352       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-046839": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:14.100807       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-046839": dial tcp 192.168.72.74:8443: connect: connection refused
	
	
	==> kube-proxy [2b1724a578869bd4afc39040fb96805aff8c438683be6ce75fcd1f04eb92d178] <==
	I0108 21:22:28.587625       1 server_others.go:69] "Using iptables proxy"
	I0108 21:22:28.617930       1 node.go:141] Successfully retrieved node IP: 192.168.72.74
	I0108 21:22:28.681181       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:22:28.681313       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:22:28.683708       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:22:28.683813       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:22:28.683992       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:22:28.684030       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:22:28.685631       1 config.go:188] "Starting service config controller"
	I0108 21:22:28.685678       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:22:28.685697       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:22:28.685701       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:22:28.686030       1 config.go:315] "Starting node config controller"
	I0108 21:22:28.686034       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:22:28.786465       1 shared_informer.go:318] Caches are synced for node config
	I0108 21:22:28.786518       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 21:22:28.786484       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [854f808e04cd67296ae3625cc90f6564702fa4adecef4999912320c5ca5c9533] <==
	I0108 21:22:23.432545       1 serving.go:348] Generated self-signed cert in-memory
	W0108 21:22:27.200258       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 21:22:27.200412       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:22:27.200427       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 21:22:27.200433       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 21:22:27.305329       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0108 21:22:27.305414       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:22:27.308867       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 21:22:27.308992       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:22:27.309616       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0108 21:22:27.309855       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 21:22:27.410126       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9ca96751246c22a1a430c5a792b4130efbeb05e4f62eb3785bb0c3cea59c265d] <==
	E0108 21:22:14.884533       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.72.74:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:15.300607       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.72.74:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:15.300686       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.72.74:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:15.583537       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.72.74:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:15.583681       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.72.74:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:15.877872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.72.74:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:15.878095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.72.74:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:16.195767       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.72.74:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:16.195860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.72.74:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:16.598424       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.72.74:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:16.598468       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.72.74:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:16.728374       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.74:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:16.728471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.74:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:16.792352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.72.74:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:16.792530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.72.74:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:16.926750       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.72.74:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:16.926919       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.72.74:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:17.108378       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.72.74:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:17.108459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.72.74:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	W0108 21:22:17.443779       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.72.74:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:17.443924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.72.74:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	E0108 21:22:19.383849       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0108 21:22:19.384353       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0108 21:22:19.384490       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0108 21:22:19.384633       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:16:38 UTC, ends at Mon 2024-01-08 21:22:49 UTC. --
	Jan 08 21:22:21 pause-046839 kubelet[3352]: E0108 21:22:21.707017    3352 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.74:8443: connect: connection refused" node="pause-046839"
	Jan 08 21:22:21 pause-046839 kubelet[3352]: W0108 21:22:21.966600    3352 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-046839&limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:21 pause-046839 kubelet[3352]: E0108 21:22:21.966698    3352 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-046839&limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: W0108 21:22:22.077912    3352 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: E0108 21:22:22.078034    3352 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: W0108 21:22:22.316669    3352 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: E0108 21:22:22.316768    3352 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: W0108 21:22:22.341970    3352 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: E0108 21:22:22.342059    3352 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.74:8443: connect: connection refused
	Jan 08 21:22:22 pause-046839 kubelet[3352]: E0108 21:22:22.401081    3352 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-046839?timeout=10s\": dial tcp 192.168.72.74:8443: connect: connection refused" interval="1.6s"
	Jan 08 21:22:22 pause-046839 kubelet[3352]: I0108 21:22:22.508902    3352 kubelet_node_status.go:70] "Attempting to register node" node="pause-046839"
	Jan 08 21:22:22 pause-046839 kubelet[3352]: E0108 21:22:22.509469    3352 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.74:8443: connect: connection refused" node="pause-046839"
	Jan 08 21:22:24 pause-046839 kubelet[3352]: I0108 21:22:24.110765    3352 kubelet_node_status.go:70] "Attempting to register node" node="pause-046839"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.347702    3352 kubelet_node_status.go:108] "Node was previously registered" node="pause-046839"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.347832    3352 kubelet_node_status.go:73] "Successfully registered node" node="pause-046839"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.349653    3352 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.350638    3352 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.980117    3352 apiserver.go:52] "Watching apiserver"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.983895    3352 topology_manager.go:215] "Topology Admit Handler" podUID="9af4e26a-25dc-4ac5-b6e3-d2532a643391" podNamespace="kube-system" podName="coredns-5dd5756b68-sqb52"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.984074    3352 topology_manager.go:215] "Topology Admit Handler" podUID="e7615d32-a6f2-461d-b804-930d11feddf3" podNamespace="kube-system" podName="kube-proxy-66j2k"
	Jan 08 21:22:27 pause-046839 kubelet[3352]: I0108 21:22:27.996597    3352 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Jan 08 21:22:28 pause-046839 kubelet[3352]: I0108 21:22:28.094001    3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7615d32-a6f2-461d-b804-930d11feddf3-lib-modules\") pod \"kube-proxy-66j2k\" (UID: \"e7615d32-a6f2-461d-b804-930d11feddf3\") " pod="kube-system/kube-proxy-66j2k"
	Jan 08 21:22:28 pause-046839 kubelet[3352]: I0108 21:22:28.094046    3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7615d32-a6f2-461d-b804-930d11feddf3-xtables-lock\") pod \"kube-proxy-66j2k\" (UID: \"e7615d32-a6f2-461d-b804-930d11feddf3\") " pod="kube-system/kube-proxy-66j2k"
	Jan 08 21:22:28 pause-046839 kubelet[3352]: I0108 21:22:28.284654    3352 scope.go:117] "RemoveContainer" containerID="289e84ca8abe46c26b95fe6ce12c1fa257cb0070a199e545506a338a86a4e30d"
	Jan 08 21:22:28 pause-046839 kubelet[3352]: I0108 21:22:28.286811    3352 scope.go:117] "RemoveContainer" containerID="7984e52ff6beab5155133649d1aac1cdfcf1fc78c0d3a3d143109be96fcfe8eb"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-046839 -n pause-046839
helpers_test.go:261: (dbg) Run:  kubectl --context pause-046839 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (317.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-420119 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-420119 --alsologtostderr -v=3: exit status 82 (2m0.89198005s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-420119"  ...
	* Stopping node "no-preload-420119"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:19:20.080621   49012 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:19:20.080776   49012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:19:20.080787   49012 out.go:309] Setting ErrFile to fd 2...
	I0108 21:19:20.080791   49012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:19:20.081018   49012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:19:20.081272   49012 out.go:303] Setting JSON to false
	I0108 21:19:20.081364   49012 mustload.go:65] Loading cluster: no-preload-420119
	I0108 21:19:20.081744   49012 config.go:182] Loaded profile config "no-preload-420119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 21:19:20.081838   49012 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/config.json ...
	I0108 21:19:20.082011   49012 mustload.go:65] Loading cluster: no-preload-420119
	I0108 21:19:20.082142   49012 config.go:182] Loaded profile config "no-preload-420119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 21:19:20.082184   49012 stop.go:39] StopHost: no-preload-420119
	I0108 21:19:20.082587   49012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:19:20.082629   49012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:19:20.097170   49012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44355
	I0108 21:19:20.097704   49012 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:19:20.098380   49012 main.go:141] libmachine: Using API Version  1
	I0108 21:19:20.098409   49012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:19:20.098841   49012 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:19:20.101413   49012 out.go:177] * Stopping node "no-preload-420119"  ...
	I0108 21:19:20.103214   49012 main.go:141] libmachine: Stopping "no-preload-420119"...
	I0108 21:19:20.103246   49012 main.go:141] libmachine: (no-preload-420119) Calling .GetState
	I0108 21:19:20.105152   49012 main.go:141] libmachine: (no-preload-420119) Calling .Stop
	I0108 21:19:20.108650   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 0/60
	I0108 21:19:21.110713   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 1/60
	I0108 21:19:22.112347   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 2/60
	I0108 21:19:23.113828   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 3/60
	I0108 21:19:24.115266   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 4/60
	I0108 21:19:25.117882   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 5/60
	I0108 21:19:26.119955   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 6/60
	I0108 21:19:27.121590   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 7/60
	I0108 21:19:28.123074   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 8/60
	I0108 21:19:29.124680   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 9/60
	I0108 21:19:30.126782   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 10/60
	I0108 21:19:31.128672   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 11/60
	I0108 21:19:32.130259   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 12/60
	I0108 21:19:33.131740   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 13/60
	I0108 21:19:34.133144   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 14/60
	I0108 21:19:35.135228   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 15/60
	I0108 21:19:36.136774   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 16/60
	I0108 21:19:37.138774   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 17/60
	I0108 21:19:38.140153   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 18/60
	I0108 21:19:39.141604   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 19/60
	I0108 21:19:40.143924   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 20/60
	I0108 21:19:41.145266   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 21/60
	I0108 21:19:42.146860   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 22/60
	I0108 21:19:43.148271   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 23/60
	I0108 21:19:44.149915   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 24/60
	I0108 21:19:45.151972   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 25/60
	I0108 21:19:46.153466   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 26/60
	I0108 21:19:47.155172   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 27/60
	I0108 21:19:48.157076   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 28/60
	I0108 21:19:49.158496   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 29/60
	I0108 21:19:50.160827   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 30/60
	I0108 21:19:51.162411   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 31/60
	I0108 21:19:52.163845   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 32/60
	I0108 21:19:53.165516   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 33/60
	I0108 21:19:54.166776   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 34/60
	I0108 21:19:55.168797   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 35/60
	I0108 21:19:56.170047   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 36/60
	I0108 21:19:57.171512   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 37/60
	I0108 21:19:58.172848   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 38/60
	I0108 21:19:59.174196   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 39/60
	I0108 21:20:00.175532   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 40/60
	I0108 21:20:01.176926   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 41/60
	I0108 21:20:02.178304   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 42/60
	I0108 21:20:03.179501   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 43/60
	I0108 21:20:04.180808   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 44/60
	I0108 21:20:05.182969   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 45/60
	I0108 21:20:06.184504   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 46/60
	I0108 21:20:07.185907   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 47/60
	I0108 21:20:08.187198   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 48/60
	I0108 21:20:09.188842   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 49/60
	I0108 21:20:10.190103   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 50/60
	I0108 21:20:11.191905   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 51/60
	I0108 21:20:12.193257   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 52/60
	I0108 21:20:13.194725   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 53/60
	I0108 21:20:14.196600   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 54/60
	I0108 21:20:15.198422   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 55/60
	I0108 21:20:16.200169   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 56/60
	I0108 21:20:17.201578   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 57/60
	I0108 21:20:18.203445   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 58/60
	I0108 21:20:19.204928   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 59/60
	I0108 21:20:20.205890   49012 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 21:20:20.205933   49012 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:20:20.205951   49012 retry.go:31] will retry after 570.248881ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:20:20.776652   49012 stop.go:39] StopHost: no-preload-420119
	I0108 21:20:20.777034   49012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:20:20.777111   49012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:20:20.791516   49012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I0108 21:20:20.791961   49012 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:20:20.792462   49012 main.go:141] libmachine: Using API Version  1
	I0108 21:20:20.792484   49012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:20:20.792812   49012 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:20:20.795463   49012 out.go:177] * Stopping node "no-preload-420119"  ...
	I0108 21:20:20.797419   49012 main.go:141] libmachine: Stopping "no-preload-420119"...
	I0108 21:20:20.797442   49012 main.go:141] libmachine: (no-preload-420119) Calling .GetState
	I0108 21:20:20.799269   49012 main.go:141] libmachine: (no-preload-420119) Calling .Stop
	I0108 21:20:20.803223   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 0/60
	I0108 21:20:21.804701   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 1/60
	I0108 21:20:22.806168   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 2/60
	I0108 21:20:23.807776   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 3/60
	I0108 21:20:24.809649   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 4/60
	I0108 21:20:25.811090   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 5/60
	I0108 21:20:26.812538   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 6/60
	I0108 21:20:27.813856   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 7/60
	I0108 21:20:28.815505   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 8/60
	I0108 21:20:29.816952   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 9/60
	I0108 21:20:30.818862   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 10/60
	I0108 21:20:31.820524   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 11/60
	I0108 21:20:32.823045   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 12/60
	I0108 21:20:33.825434   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 13/60
	I0108 21:20:34.826863   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 14/60
	I0108 21:20:35.828361   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 15/60
	I0108 21:20:36.830803   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 16/60
	I0108 21:20:37.832215   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 17/60
	I0108 21:20:38.834982   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 18/60
	I0108 21:20:39.836466   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 19/60
	I0108 21:20:40.838161   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 20/60
	I0108 21:20:41.839725   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 21/60
	I0108 21:20:42.841155   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 22/60
	I0108 21:20:43.842556   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 23/60
	I0108 21:20:44.843982   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 24/60
	I0108 21:20:45.845636   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 25/60
	I0108 21:20:46.846997   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 26/60
	I0108 21:20:47.848773   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 27/60
	I0108 21:20:48.850072   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 28/60
	I0108 21:20:49.851249   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 29/60
	I0108 21:20:50.853120   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 30/60
	I0108 21:20:51.854434   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 31/60
	I0108 21:20:52.856048   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 32/60
	I0108 21:20:53.858055   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 33/60
	I0108 21:20:54.860144   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 34/60
	I0108 21:20:55.862561   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 35/60
	I0108 21:20:56.864326   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 36/60
	I0108 21:20:57.865996   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 37/60
	I0108 21:20:58.867637   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 38/60
	I0108 21:20:59.869016   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 39/60
	I0108 21:21:00.870817   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 40/60
	I0108 21:21:01.872177   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 41/60
	I0108 21:21:02.873439   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 42/60
	I0108 21:21:03.874873   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 43/60
	I0108 21:21:04.876270   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 44/60
	I0108 21:21:05.877866   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 45/60
	I0108 21:21:06.879336   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 46/60
	I0108 21:21:07.880889   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 47/60
	I0108 21:21:08.882379   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 48/60
	I0108 21:21:09.883852   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 49/60
	I0108 21:21:10.885919   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 50/60
	I0108 21:21:11.888243   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 51/60
	I0108 21:21:12.890654   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 52/60
	I0108 21:21:13.892722   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 53/60
	I0108 21:21:14.894048   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 54/60
	I0108 21:21:15.895713   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 55/60
	I0108 21:21:16.897118   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 56/60
	I0108 21:21:17.899493   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 57/60
	I0108 21:21:18.901076   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 58/60
	I0108 21:21:19.903159   49012 main.go:141] libmachine: (no-preload-420119) Waiting for machine to stop 59/60
	I0108 21:21:20.904202   49012 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 21:21:20.904254   49012 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:21:20.906327   49012 out.go:177] 
	W0108 21:21:20.908040   49012 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0108 21:21:20.908068   49012 out.go:239] * 
	* 
	W0108 21:21:20.910903   49012 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:21:20.912487   49012 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-420119 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-420119 -n no-preload-420119
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-420119 -n no-preload-420119: exit status 3 (18.474589504s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:21:39.388370   49380 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.226:22: connect: no route to host
	E0108 21:21:39.388389   49380 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.226:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-420119" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-420119 -n no-preload-420119
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-420119 -n no-preload-420119: exit status 3 (3.199421558s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:21:42.588470   49454 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.226:22: connect: no route to host
	E0108 21:21:42.588490   49454 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.226:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-420119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-420119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.158000352s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.83.226:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-420119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-420119 -n no-preload-420119
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-420119 -n no-preload-420119: exit status 3 (3.057501207s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:21:51.804406   49524 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.226:22: connect: no route to host
	E0108 21:21:51.804425   49524 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.226:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-420119" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-930023 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-930023 --alsologtostderr -v=3: exit status 82 (2m1.035069136s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-930023"  ...
	* Stopping node "embed-certs-930023"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:28:19.862076   51608 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:28:19.862217   51608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:28:19.862229   51608 out.go:309] Setting ErrFile to fd 2...
	I0108 21:28:19.862236   51608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:28:19.862564   51608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:28:19.862902   51608 out.go:303] Setting JSON to false
	I0108 21:28:19.863011   51608 mustload.go:65] Loading cluster: embed-certs-930023
	I0108 21:28:19.863567   51608 config.go:182] Loaded profile config "embed-certs-930023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:28:19.863681   51608 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023/config.json ...
	I0108 21:28:19.863921   51608 mustload.go:65] Loading cluster: embed-certs-930023
	I0108 21:28:19.864087   51608 config.go:182] Loaded profile config "embed-certs-930023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:28:19.864146   51608 stop.go:39] StopHost: embed-certs-930023
	I0108 21:28:19.864820   51608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:28:19.864873   51608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:28:19.882294   51608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0108 21:28:19.882792   51608 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:28:19.883517   51608 main.go:141] libmachine: Using API Version  1
	I0108 21:28:19.883550   51608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:28:19.884048   51608 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:28:19.886917   51608 out.go:177] * Stopping node "embed-certs-930023"  ...
	I0108 21:28:19.888358   51608 main.go:141] libmachine: Stopping "embed-certs-930023"...
	I0108 21:28:19.888398   51608 main.go:141] libmachine: (embed-certs-930023) Calling .GetState
	I0108 21:28:19.890452   51608 main.go:141] libmachine: (embed-certs-930023) Calling .Stop
	I0108 21:28:19.894995   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 0/60
	I0108 21:28:20.896569   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 1/60
	I0108 21:28:21.898928   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 2/60
	I0108 21:28:22.901929   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 3/60
	I0108 21:28:23.903424   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 4/60
	I0108 21:28:24.906414   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 5/60
	I0108 21:28:25.907991   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 6/60
	I0108 21:28:26.910395   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 7/60
	I0108 21:28:27.912975   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 8/60
	I0108 21:28:28.915316   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 9/60
	I0108 21:28:29.917054   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 10/60
	I0108 21:28:30.919968   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 11/60
	I0108 21:28:31.921275   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 12/60
	I0108 21:28:32.924513   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 13/60
	I0108 21:28:33.926537   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 14/60
	I0108 21:28:34.928731   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 15/60
	I0108 21:28:35.930054   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 16/60
	I0108 21:28:36.932192   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 17/60
	I0108 21:28:37.933696   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 18/60
	I0108 21:28:38.935039   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 19/60
	I0108 21:28:39.936831   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 20/60
	I0108 21:28:40.938635   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 21/60
	I0108 21:28:41.940548   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 22/60
	I0108 21:28:42.942358   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 23/60
	I0108 21:28:43.943944   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 24/60
	I0108 21:28:44.945951   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 25/60
	I0108 21:28:45.947275   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 26/60
	I0108 21:28:46.949458   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 27/60
	I0108 21:28:47.951265   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 28/60
	I0108 21:28:48.952888   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 29/60
	I0108 21:28:49.955240   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 30/60
	I0108 21:28:50.957182   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 31/60
	I0108 21:28:51.958711   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 32/60
	I0108 21:28:52.960162   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 33/60
	I0108 21:28:53.962279   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 34/60
	I0108 21:28:54.964357   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 35/60
	I0108 21:28:55.965901   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 36/60
	I0108 21:28:56.967346   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 37/60
	I0108 21:28:57.969009   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 38/60
	I0108 21:28:58.970457   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 39/60
	I0108 21:28:59.972721   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 40/60
	I0108 21:29:00.974765   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 41/60
	I0108 21:29:01.976852   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 42/60
	I0108 21:29:02.978957   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 43/60
	I0108 21:29:03.980936   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 44/60
	I0108 21:29:04.982892   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 45/60
	I0108 21:29:05.984036   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 46/60
	I0108 21:29:06.986030   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 47/60
	I0108 21:29:07.988474   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 48/60
	I0108 21:29:08.990758   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 49/60
	I0108 21:29:09.992660   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 50/60
	I0108 21:29:10.995191   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 51/60
	I0108 21:29:11.997037   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 52/60
	I0108 21:29:12.998597   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 53/60
	I0108 21:29:13.999789   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 54/60
	I0108 21:29:15.001877   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 55/60
	I0108 21:29:16.004025   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 56/60
	I0108 21:29:17.005626   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 57/60
	I0108 21:29:18.006992   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 58/60
	I0108 21:29:19.008430   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 59/60
	I0108 21:29:20.009928   51608 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 21:29:20.009977   51608 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:29:20.010006   51608 retry.go:31] will retry after 686.804683ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:29:20.697280   51608 stop.go:39] StopHost: embed-certs-930023
	I0108 21:29:20.697644   51608 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:29:20.697719   51608 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:29:20.713311   51608 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37655
	I0108 21:29:20.713777   51608 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:29:20.714266   51608 main.go:141] libmachine: Using API Version  1
	I0108 21:29:20.714301   51608 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:29:20.714653   51608 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:29:20.717224   51608 out.go:177] * Stopping node "embed-certs-930023"  ...
	I0108 21:29:20.719486   51608 main.go:141] libmachine: Stopping "embed-certs-930023"...
	I0108 21:29:20.719518   51608 main.go:141] libmachine: (embed-certs-930023) Calling .GetState
	I0108 21:29:20.721437   51608 main.go:141] libmachine: (embed-certs-930023) Calling .Stop
	I0108 21:29:20.725428   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 0/60
	I0108 21:29:21.727562   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 1/60
	I0108 21:29:22.729172   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 2/60
	I0108 21:29:23.730735   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 3/60
	I0108 21:29:24.732413   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 4/60
	I0108 21:29:25.734139   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 5/60
	I0108 21:29:26.735617   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 6/60
	I0108 21:29:27.737855   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 7/60
	I0108 21:29:28.739249   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 8/60
	I0108 21:29:29.740847   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 9/60
	I0108 21:29:30.742073   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 10/60
	I0108 21:29:31.743361   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 11/60
	I0108 21:29:32.744921   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 12/60
	I0108 21:29:33.746524   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 13/60
	I0108 21:29:34.748195   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 14/60
	I0108 21:29:35.749852   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 15/60
	I0108 21:29:36.751142   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 16/60
	I0108 21:29:37.752616   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 17/60
	I0108 21:29:38.754132   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 18/60
	I0108 21:29:39.756509   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 19/60
	I0108 21:29:40.758285   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 20/60
	I0108 21:29:41.759692   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 21/60
	I0108 21:29:42.761437   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 22/60
	I0108 21:29:43.762984   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 23/60
	I0108 21:29:44.764347   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 24/60
	I0108 21:29:45.766604   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 25/60
	I0108 21:29:46.768250   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 26/60
	I0108 21:29:47.769640   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 27/60
	I0108 21:29:48.771043   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 28/60
	I0108 21:29:49.772356   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 29/60
	I0108 21:29:50.774265   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 30/60
	I0108 21:29:51.776329   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 31/60
	I0108 21:29:52.777793   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 32/60
	I0108 21:29:53.779177   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 33/60
	I0108 21:29:54.780659   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 34/60
	I0108 21:29:55.782926   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 35/60
	I0108 21:29:56.784479   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 36/60
	I0108 21:29:57.786022   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 37/60
	I0108 21:29:58.787457   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 38/60
	I0108 21:29:59.788896   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 39/60
	I0108 21:30:00.791060   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 40/60
	I0108 21:30:01.792751   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 41/60
	I0108 21:30:02.794263   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 42/60
	I0108 21:30:03.796832   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 43/60
	I0108 21:30:04.798769   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 44/60
	I0108 21:30:05.801160   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 45/60
	I0108 21:30:06.802816   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 46/60
	I0108 21:30:07.804223   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 47/60
	I0108 21:30:08.805790   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 48/60
	I0108 21:30:09.807333   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 49/60
	I0108 21:30:10.809337   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 50/60
	I0108 21:30:11.810853   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 51/60
	I0108 21:30:12.812247   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 52/60
	I0108 21:30:13.813651   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 53/60
	I0108 21:30:14.814960   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 54/60
	I0108 21:30:15.816727   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 55/60
	I0108 21:30:16.818757   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 56/60
	I0108 21:30:17.820536   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 57/60
	I0108 21:30:18.822137   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 58/60
	I0108 21:30:19.823660   51608 main.go:141] libmachine: (embed-certs-930023) Waiting for machine to stop 59/60
	I0108 21:30:20.824684   51608 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 21:30:20.824741   51608 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:30:20.827022   51608 out.go:177] 
	W0108 21:30:20.829009   51608 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0108 21:30:20.829035   51608 out.go:239] * 
	* 
	W0108 21:30:20.831415   51608 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:30:20.833380   51608 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-930023 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-930023 -n embed-certs-930023
E0108 21:30:36.430076   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-930023 -n embed-certs-930023: exit status 3 (18.456415808s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:30:39.292398   52053 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	E0108 21:30:39.292421   52053 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-930023" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.49s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-690577 --alsologtostderr -v=3
E0108 21:29:26.820053   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-690577 --alsologtostderr -v=3: exit status 82 (2m1.125416406s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-690577"  ...
	* Stopping node "default-k8s-diff-port-690577"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:29:22.021902   51887 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:29:22.022076   51887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:29:22.022090   51887 out.go:309] Setting ErrFile to fd 2...
	I0108 21:29:22.022097   51887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:29:22.022434   51887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:29:22.022765   51887 out.go:303] Setting JSON to false
	I0108 21:29:22.022866   51887 mustload.go:65] Loading cluster: default-k8s-diff-port-690577
	I0108 21:29:22.023335   51887 config.go:182] Loaded profile config "default-k8s-diff-port-690577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:29:22.023438   51887 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/config.json ...
	I0108 21:29:22.023629   51887 mustload.go:65] Loading cluster: default-k8s-diff-port-690577
	I0108 21:29:22.023737   51887 config.go:182] Loaded profile config "default-k8s-diff-port-690577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:29:22.023762   51887 stop.go:39] StopHost: default-k8s-diff-port-690577
	I0108 21:29:22.024213   51887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:29:22.024255   51887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:29:22.038629   51887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44531
	I0108 21:29:22.039079   51887 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:29:22.039621   51887 main.go:141] libmachine: Using API Version  1
	I0108 21:29:22.039648   51887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:29:22.040005   51887 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:29:22.042639   51887 out.go:177] * Stopping node "default-k8s-diff-port-690577"  ...
	I0108 21:29:22.044068   51887 main.go:141] libmachine: Stopping "default-k8s-diff-port-690577"...
	I0108 21:29:22.044110   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetState
	I0108 21:29:22.045976   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .Stop
	I0108 21:29:22.049775   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 0/60
	I0108 21:29:23.051269   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 1/60
	I0108 21:29:24.052780   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 2/60
	I0108 21:29:25.054744   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 3/60
	I0108 21:29:26.056292   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 4/60
	I0108 21:29:27.058415   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 5/60
	I0108 21:29:28.059947   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 6/60
	I0108 21:29:29.061418   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 7/60
	I0108 21:29:30.062927   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 8/60
	I0108 21:29:31.064585   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 9/60
	I0108 21:29:32.066947   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 10/60
	I0108 21:29:33.068478   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 11/60
	I0108 21:29:34.070513   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 12/60
	I0108 21:29:35.072104   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 13/60
	I0108 21:29:36.073738   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 14/60
	I0108 21:29:37.075740   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 15/60
	I0108 21:29:38.077095   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 16/60
	I0108 21:29:39.079270   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 17/60
	I0108 21:29:40.080606   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 18/60
	I0108 21:29:41.082206   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 19/60
	I0108 21:29:42.084567   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 20/60
	I0108 21:29:43.086181   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 21/60
	I0108 21:29:44.087568   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 22/60
	I0108 21:29:45.088837   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 23/60
	I0108 21:29:46.090524   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 24/60
	I0108 21:29:47.092669   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 25/60
	I0108 21:29:48.094644   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 26/60
	I0108 21:29:49.096795   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 27/60
	I0108 21:29:50.098262   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 28/60
	I0108 21:29:51.099622   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 29/60
	I0108 21:29:52.102135   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 30/60
	I0108 21:29:53.103484   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 31/60
	I0108 21:29:54.105141   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 32/60
	I0108 21:29:55.106777   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 33/60
	I0108 21:29:56.108302   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 34/60
	I0108 21:29:57.110233   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 35/60
	I0108 21:29:58.111674   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 36/60
	I0108 21:29:59.112873   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 37/60
	I0108 21:30:00.114571   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 38/60
	I0108 21:30:01.115985   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 39/60
	I0108 21:30:02.118188   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 40/60
	I0108 21:30:03.119594   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 41/60
	I0108 21:30:04.122045   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 42/60
	I0108 21:30:05.123867   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 43/60
	I0108 21:30:06.125452   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 44/60
	I0108 21:30:07.127494   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 45/60
	I0108 21:30:08.129022   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 46/60
	I0108 21:30:09.130663   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 47/60
	I0108 21:30:10.132126   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 48/60
	I0108 21:30:11.133694   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 49/60
	I0108 21:30:12.136033   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 50/60
	I0108 21:30:13.137616   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 51/60
	I0108 21:30:14.139033   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 52/60
	I0108 21:30:15.140400   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 53/60
	I0108 21:30:16.142559   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 54/60
	I0108 21:30:17.144447   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 55/60
	I0108 21:30:18.145771   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 56/60
	I0108 21:30:19.147213   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 57/60
	I0108 21:30:20.148833   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 58/60
	I0108 21:30:21.150930   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 59/60
	I0108 21:30:22.152212   51887 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 21:30:22.152277   51887 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:30:22.152295   51887 retry.go:31] will retry after 802.623599ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:30:22.955218   51887 stop.go:39] StopHost: default-k8s-diff-port-690577
	I0108 21:30:22.955584   51887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:30:22.955641   51887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:30:22.970267   51887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41409
	I0108 21:30:22.970712   51887 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:30:22.971195   51887 main.go:141] libmachine: Using API Version  1
	I0108 21:30:22.971220   51887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:30:22.971552   51887 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:30:22.974104   51887 out.go:177] * Stopping node "default-k8s-diff-port-690577"  ...
	I0108 21:30:22.975826   51887 main.go:141] libmachine: Stopping "default-k8s-diff-port-690577"...
	I0108 21:30:22.975851   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetState
	I0108 21:30:22.977643   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .Stop
	I0108 21:30:22.981313   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 0/60
	I0108 21:30:23.982672   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 1/60
	I0108 21:30:24.984125   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 2/60
	I0108 21:30:25.985646   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 3/60
	I0108 21:30:26.987816   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 4/60
	I0108 21:30:27.989475   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 5/60
	I0108 21:30:28.990991   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 6/60
	I0108 21:30:29.992426   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 7/60
	I0108 21:30:30.994709   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 8/60
	I0108 21:30:31.996114   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 9/60
	I0108 21:30:32.997877   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 10/60
	I0108 21:30:33.999304   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 11/60
	I0108 21:30:35.000918   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 12/60
	I0108 21:30:36.002402   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 13/60
	I0108 21:30:37.003768   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 14/60
	I0108 21:30:38.005399   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 15/60
	I0108 21:30:39.006796   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 16/60
	I0108 21:30:40.008366   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 17/60
	I0108 21:30:41.010966   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 18/60
	I0108 21:30:42.012748   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 19/60
	I0108 21:30:43.014039   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 20/60
	I0108 21:30:44.016118   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 21/60
	I0108 21:30:45.017768   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 22/60
	I0108 21:30:46.019680   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 23/60
	I0108 21:30:47.021118   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 24/60
	I0108 21:30:48.022525   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 25/60
	I0108 21:30:49.024298   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 26/60
	I0108 21:30:50.025875   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 27/60
	I0108 21:30:51.027587   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 28/60
	I0108 21:30:52.028966   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 29/60
	I0108 21:30:53.031070   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 30/60
	I0108 21:30:54.032528   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 31/60
	I0108 21:30:55.034104   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 32/60
	I0108 21:30:56.036476   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 33/60
	I0108 21:30:57.037850   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 34/60
	I0108 21:30:58.040080   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 35/60
	I0108 21:30:59.041592   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 36/60
	I0108 21:31:00.042894   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 37/60
	I0108 21:31:01.044244   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 38/60
	I0108 21:31:02.045493   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 39/60
	I0108 21:31:03.047162   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 40/60
	I0108 21:31:04.048676   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 41/60
	I0108 21:31:05.050729   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 42/60
	I0108 21:31:06.052733   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 43/60
	I0108 21:31:07.054860   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 44/60
	I0108 21:31:08.056285   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 45/60
	I0108 21:31:09.057925   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 46/60
	I0108 21:31:10.059375   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 47/60
	I0108 21:31:11.061326   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 48/60
	I0108 21:31:12.062620   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 49/60
	I0108 21:31:13.064708   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 50/60
	I0108 21:31:14.066060   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 51/60
	I0108 21:31:15.067623   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 52/60
	I0108 21:31:16.068932   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 53/60
	I0108 21:31:17.070468   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 54/60
	I0108 21:31:18.072076   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 55/60
	I0108 21:31:19.073870   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 56/60
	I0108 21:31:20.075393   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 57/60
	I0108 21:31:21.076992   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 58/60
	I0108 21:31:22.078404   51887 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for machine to stop 59/60
	I0108 21:31:23.079458   51887 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 21:31:23.079514   51887 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:31:23.081913   51887 out.go:177] 
	W0108 21:31:23.083700   51887 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0108 21:31:23.083723   51887 out.go:239] * 
	* 
	W0108 21:31:23.086374   51887 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:31:23.088141   51887 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-690577 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-690577 -n default-k8s-diff-port-690577
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-690577 -n default-k8s-diff-port-690577: exit status 3 (18.66624121s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:31:41.756379   52393 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.165:22: connect: no route to host
	E0108 21:31:41.756411   52393 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.165:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-690577" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.79s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-930023 -n embed-certs-930023
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-930023 -n embed-certs-930023: exit status 3 (3.200395093s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:30:42.492374   52140 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	E0108 21:30:42.492401   52140 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-930023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-930023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153122948s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-930023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-930023 -n embed-certs-930023
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-930023 -n embed-certs-930023: exit status 3 (3.062104168s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:30:51.708487   52210 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host
	E0108 21:30:51.708509   52210 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.142:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-930023" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-690577 -n default-k8s-diff-port-690577
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-690577 -n default-k8s-diff-port-690577: exit status 3 (3.199590735s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:31:44.956431   52459 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.165:22: connect: no route to host
	E0108 21:31:44.956449   52459 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.165:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-690577 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-690577 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153479659s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.165:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-690577 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-690577 -n default-k8s-diff-port-690577
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-690577 -n default-k8s-diff-port-690577: exit status 3 (3.062602104s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:31:54.172448   52528 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.165:22: connect: no route to host
	E0108 21:31:54.172476   52528 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.165:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-690577" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (468.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 21:34:26.819558   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 21:35:36.429744   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 21:36:04.516623   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 21:37:27.564802   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-879273 -n old-k8s-version-879273
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-08 21:41:52.483998964 +0000 UTC m=+5536.408890325
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-879273 -n old-k8s-version-879273
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-879273 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-879273 logs -n 25: (1.549786589s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | cert-options-686681 ssh                                | cert-options-686681          | jenkins | v1.32.0 | 08 Jan 24 21:16 UTC | 08 Jan 24 21:16 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-686681 -- sudo                         | cert-options-686681          | jenkins | v1.32.0 | 08 Jan 24 21:16 UTC | 08 Jan 24 21:16 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-686681                                 | cert-options-686681          | jenkins | v1.32.0 | 08 Jan 24 21:16 UTC | 08 Jan 24 21:16 UTC |
	| start   | -p no-preload-420119                                   | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:16 UTC | 08 Jan 24 21:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-879273             | old-k8s-version-879273       | jenkins | v1.32.0 | 08 Jan 24 21:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-879273                              | old-k8s-version-879273       | jenkins | v1.32.0 | 08 Jan 24 21:17 UTC | 08 Jan 24 21:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p pause-046839                                        | pause-046839                 | jenkins | v1.32.0 | 08 Jan 24 21:17 UTC | 08 Jan 24 21:22 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-420119             | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC | 08 Jan 24 21:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-420119                                   | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-001550                              | cert-expiration-001550       | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC | 08 Jan 24 21:22 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-420119                  | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-420119                                   | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:21 UTC | 08 Jan 24 21:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-001550                              | cert-expiration-001550       | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	| start   | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p pause-046839                                        | pause-046839                 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	| delete  | -p                                                     | disable-driver-mounts-216454 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	|         | disable-driver-mounts-216454                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:29 UTC |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-930023            | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-690577  | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC |                     |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-930023                 | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:30 UTC | 08 Jan 24 21:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-690577       | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC | 08 Jan 24 21:40 UTC |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
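	For readability, the flags in the final "start" row above belong to a single invocation. Reconstructed from the table (the profile name and flags are taken verbatim; the binary path is assumed to be the out/minikube-linux-amd64 used elsewhere in this report), it is roughly:

	out/minikube-linux-amd64 start -p default-k8s-diff-port-690577 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --apiserver-port=8444 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.28.4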
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:31:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
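	The "Log line format" above is the standard glog-style prefix: severity letter plus date, wall-clock time, process id, and source location. As a rough illustration only (field names below are descriptive, not taken from minikube), the first entry of this log breaks down as level I (info), date 0108, time 21:31:54.230968, pid 52569, and source out.go:296 (the trailing "]" closes the prefix):

	# illustrative only: split a glog-style prefix into its fields
	echo 'I0108 21:31:54.230968   52569 out.go:296] Setting OutFile to fd 1 ...' \
	  | awk '{print "level+date=" $1, "time=" $2, "pid=" $3, "src=" $4}'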
	I0108 21:31:54.230968   52569 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:31:54.231242   52569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:54.231252   52569 out.go:309] Setting ErrFile to fd 2...
	I0108 21:31:54.231257   52569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:31:54.231475   52569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:31:54.232046   52569 out.go:303] Setting JSON to false
	I0108 21:31:54.232995   52569 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8038,"bootTime":1704741476,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:31:54.233059   52569 start.go:138] virtualization: kvm guest
	I0108 21:31:54.235942   52569 out.go:177] * [default-k8s-diff-port-690577] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:31:54.237854   52569 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:31:54.237886   52569 notify.go:220] Checking for updates...
	I0108 21:31:54.239385   52569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:31:54.241065   52569 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:31:54.242480   52569 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:31:54.244012   52569 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:31:54.245548   52569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:31:54.247529   52569 config.go:182] Loaded profile config "default-k8s-diff-port-690577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:31:54.247970   52569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:31:54.248013   52569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:31:54.263067   52569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0108 21:31:54.263478   52569 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:31:54.264015   52569 main.go:141] libmachine: Using API Version  1
	I0108 21:31:54.264045   52569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:31:54.264400   52569 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:31:54.264583   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .DriverName
	I0108 21:31:54.264809   52569 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:31:54.265138   52569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:31:54.265179   52569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:31:54.279635   52569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39911
	I0108 21:31:54.280078   52569 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:31:54.280577   52569 main.go:141] libmachine: Using API Version  1
	I0108 21:31:54.280609   52569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:31:54.281023   52569 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:31:54.281233   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .DriverName
	I0108 21:31:54.318616   52569 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 21:31:54.319997   52569 start.go:298] selected driver: kvm2
	I0108 21:31:54.320014   52569 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-690577 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-690577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.165 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:31:54.320168   52569 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:31:54.320820   52569 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:54.320879   52569 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:31:54.335551   52569 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:31:54.336047   52569 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:31:54.336152   52569 cni.go:84] Creating CNI manager for ""
	I0108 21:31:54.336171   52569 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:31:54.336192   52569 start_flags.go:323] config:
	{Name:default-k8s-diff-port-690577 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-69057
7 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.165 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:31:54.336377   52569 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:31:54.338240   52569 out.go:177] * Starting control plane node default-k8s-diff-port-690577 in cluster default-k8s-diff-port-690577
	I0108 21:31:53.686311   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:31:56.186836   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:31:55.776329   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:31:52.420580   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:31:54.917510   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:31:54.339561   52569 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:31:54.339600   52569 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:31:54.339610   52569 cache.go:56] Caching tarball of preloaded images
	I0108 21:31:54.339672   52569 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:31:54.339688   52569 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:31:54.339790   52569 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/config.json ...
	I0108 21:31:54.339965   52569 start.go:365] acquiring machines lock for default-k8s-diff-port-690577: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:31:58.685780   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:00.686405   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:31:58.844379   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:31:56.921332   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:31:58.922540   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:01.418587   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:02.686515   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:04.686665   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:04.924293   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:32:03.423227   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:05.918197   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:07.186312   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:09.686404   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:07.996360   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:32:07.919642   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:10.418611   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:12.186832   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:14.685825   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:14.076356   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:32:12.419386   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:14.920014   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:17.185869   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:19.187942   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:21.688248   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:17.148460   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:32:17.418652   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:19.418746   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:24.185897   47937 pod_ready.go:102] pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:26.678997   47937 pod_ready.go:81] duration metric: took 4m0.000110825s waiting for pod "metrics-server-74d5856cc6-sl9rl" in "kube-system" namespace to be "Ready" ...
	E0108 21:32:26.679029   47937 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0108 21:32:26.679048   47937 pod_ready.go:38] duration metric: took 4m1.199787924s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:32:26.679072   47937 kubeadm.go:640] restartCluster took 5m16.51227584s
	W0108 21:32:26.679137   47937 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:32:26.679169   47937 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0108 21:32:23.228367   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:32:26.300366   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:32:21.919128   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:23.923803   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:26.417741   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:28.422566   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:30.920139   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:31.764958   47937 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.085759922s)
	I0108 21:32:31.765038   47937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:32:31.779489   47937 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:32:31.789889   47937 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:32:31.798529   47937 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:32:31.798585   47937 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0108 21:32:32.030205   47937 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:32:32.384309   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:32:35.452462   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:32:32.923487   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:35.418334   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:41.532337   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:32:37.418903   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:39.918743   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:45.176733   47937 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0108 21:32:45.176799   47937 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 21:32:45.176870   47937 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:32:45.176975   47937 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:32:45.177056   47937 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:32:45.177163   47937 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:32:45.177245   47937 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:32:45.177311   47937 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0108 21:32:45.177375   47937 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:32:45.179031   47937 out.go:204]   - Generating certificates and keys ...
	I0108 21:32:45.179129   47937 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 21:32:45.179203   47937 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 21:32:45.179301   47937 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:32:45.179414   47937 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:32:45.179512   47937 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:32:45.179600   47937 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 21:32:45.179686   47937 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:32:45.179781   47937 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:32:45.179881   47937 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:32:45.179979   47937 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:32:45.180029   47937 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 21:32:45.180142   47937 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:32:45.180203   47937 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:32:45.180247   47937 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:32:45.180298   47937 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:32:45.180359   47937 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:32:45.180441   47937 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:32:45.182054   47937 out.go:204]   - Booting up control plane ...
	I0108 21:32:45.182149   47937 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:32:45.182247   47937 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:32:45.182325   47937 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:32:45.182428   47937 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:32:45.182637   47937 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:32:45.182757   47937 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004022 seconds
	I0108 21:32:45.182910   47937 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:32:45.183106   47937 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:32:45.183177   47937 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:32:45.183330   47937 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-879273 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0108 21:32:45.183423   47937 kubeadm.go:322] [bootstrap-token] Using token: mxh3rd.uzequ7tly7u59m4t
	I0108 21:32:45.184937   47937 out.go:204]   - Configuring RBAC rules ...
	I0108 21:32:45.185054   47937 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:32:45.185196   47937 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:32:45.185367   47937 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:32:45.185500   47937 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:32:45.185609   47937 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:32:45.185675   47937 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 21:32:45.185767   47937 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 21:32:45.185778   47937 kubeadm.go:322] 
	I0108 21:32:45.185863   47937 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 21:32:45.185876   47937 kubeadm.go:322] 
	I0108 21:32:45.185957   47937 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 21:32:45.185967   47937 kubeadm.go:322] 
	I0108 21:32:45.185988   47937 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 21:32:45.186057   47937 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:32:45.186099   47937 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:32:45.186105   47937 kubeadm.go:322] 
	I0108 21:32:45.186167   47937 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 21:32:45.186232   47937 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:32:45.186317   47937 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:32:45.186327   47937 kubeadm.go:322] 
	I0108 21:32:45.186458   47937 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I0108 21:32:45.186560   47937 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 21:32:45.186574   47937 kubeadm.go:322] 
	I0108 21:32:45.186687   47937 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token mxh3rd.uzequ7tly7u59m4t \
	I0108 21:32:45.186826   47937 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 \
	I0108 21:32:45.186864   47937 kubeadm.go:322]     --control-plane 	  
	I0108 21:32:45.186874   47937 kubeadm.go:322] 
	I0108 21:32:45.186963   47937 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:32:45.186971   47937 kubeadm.go:322] 
	I0108 21:32:45.187077   47937 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token mxh3rd.uzequ7tly7u59m4t \
	I0108 21:32:45.187199   47937 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 
	I0108 21:32:45.187208   47937 cni.go:84] Creating CNI manager for ""
	I0108 21:32:45.187214   47937 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:32:45.188845   47937 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 21:32:45.190097   47937 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 21:32:45.200454   47937 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
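	The 457-byte file copied here is the bridge CNI config recommended for the kvm2 driver + crio runtime combination (see cni.go:146 above). Its exact contents are not shown in this log; as a rough sketch only, a bridge conflist of this kind typically looks like the following (plugin list and pod subnet are illustrative assumptions, not the file minikube actually wrote in this run):

	# illustrative sketch only -- not the actual 457-byte file from this run
	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF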
	I0108 21:32:45.220731   47937 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:32:45.220825   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=old-k8s-version-879273 minikube.k8s.io/updated_at=2024_01_08T21_32_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:45.220966   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:45.277274   47937 ops.go:34] apiserver oom_adj: -16
	I0108 21:32:45.560812   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:46.061584   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:46.561056   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:44.604316   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:32:41.919604   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:43.926824   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:46.419161   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:47.061624   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:47.561267   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:48.061319   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:48.561460   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:49.061745   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:49.561786   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:50.061368   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:50.561662   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:51.061600   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:51.561216   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:50.684323   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:32:48.917826   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:50.920164   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:52.061457   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:52.561742   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:53.061311   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:53.561171   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:54.060890   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:54.560860   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:55.061073   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:55.561124   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:56.060933   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:56.561919   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:53.756402   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:32:53.419547   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:55.421819   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:32:57.061369   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:57.561740   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:58.061835   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:58.561369   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:59.061769   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:32:59.561893   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:00.061027   47937 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:00.167363   47937 kubeadm.go:1088] duration metric: took 14.946432077s to wait for elevateKubeSystemPrivileges.
	I0108 21:33:00.167405   47937 kubeadm.go:406] StartCluster complete in 5m50.054171439s
	I0108 21:33:00.167426   47937 settings.go:142] acquiring lock: {Name:mk91d3baf51872e4bb0758b94fca7c7249bb9666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:33:00.167518   47937 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:33:00.169088   47937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/kubeconfig: {Name:mkeb2e8a20e31c0c2d5c7e8214a27af3141300ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:33:00.169315   47937 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:33:00.169430   47937 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:33:00.169483   47937 config.go:182] Loaded profile config "old-k8s-version-879273": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 21:33:00.169514   47937 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-879273"
	I0108 21:33:00.169526   47937 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-879273"
	I0108 21:33:00.169526   47937 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-879273"
	I0108 21:33:00.169533   47937 addons.go:237] Setting addon storage-provisioner=true in "old-k8s-version-879273"
	W0108 21:33:00.169543   47937 addons.go:246] addon storage-provisioner should already be in state true
	I0108 21:33:00.169550   47937 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-879273"
	I0108 21:33:00.169601   47937 host.go:66] Checking if "old-k8s-version-879273" exists ...
	I0108 21:33:00.169545   47937 addons.go:237] Setting addon metrics-server=true in "old-k8s-version-879273"
	W0108 21:33:00.169660   47937 addons.go:246] addon metrics-server should already be in state true
	I0108 21:33:00.169706   47937 host.go:66] Checking if "old-k8s-version-879273" exists ...
	I0108 21:33:00.169993   47937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:00.170036   47937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:00.170048   47937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:00.170083   47937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:00.170095   47937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:00.170123   47937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:00.186177   47937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I0108 21:33:00.186681   47937 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:00.187260   47937 main.go:141] libmachine: Using API Version  1
	I0108 21:33:00.187289   47937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:00.187651   47937 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:00.187864   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetState
	I0108 21:33:00.188248   47937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36691
	I0108 21:33:00.188258   47937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33315
	I0108 21:33:00.188701   47937 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:00.188707   47937 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:00.189210   47937 main.go:141] libmachine: Using API Version  1
	I0108 21:33:00.189240   47937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:00.189215   47937 main.go:141] libmachine: Using API Version  1
	I0108 21:33:00.189260   47937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:00.189621   47937 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:00.189671   47937 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:00.190115   47937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:00.190156   47937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:00.190215   47937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:00.190232   47937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:00.191840   47937 addons.go:237] Setting addon default-storageclass=true in "old-k8s-version-879273"
	W0108 21:33:00.191863   47937 addons.go:246] addon default-storageclass should already be in state true
	I0108 21:33:00.191891   47937 host.go:66] Checking if "old-k8s-version-879273" exists ...
	I0108 21:33:00.192271   47937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:00.192314   47937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:00.206292   47937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33115
	I0108 21:33:00.206782   47937 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:00.207345   47937 main.go:141] libmachine: Using API Version  1
	I0108 21:33:00.207370   47937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:00.207824   47937 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:00.208011   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetState
	I0108 21:33:00.209850   47937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42713
	I0108 21:33:00.210203   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .DriverName
	I0108 21:33:00.211826   47937 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:33:00.210538   47937 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:00.211139   47937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I0108 21:33:00.213186   47937 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:33:00.213207   47937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:33:00.213225   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetSSHHostname
	I0108 21:33:00.213442   47937 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:00.213616   47937 main.go:141] libmachine: Using API Version  1
	I0108 21:33:00.213640   47937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:00.213903   47937 main.go:141] libmachine: Using API Version  1
	I0108 21:33:00.213923   47937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:00.214116   47937 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:00.214386   47937 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:00.214470   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetState
	I0108 21:33:00.214959   47937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:00.214984   47937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:00.216771   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .DriverName
	I0108 21:33:00.218588   47937 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 21:33:00.217280   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | domain old-k8s-version-879273 has defined MAC address 52:54:00:d7:3b:14 in network mk-old-k8s-version-879273
	I0108 21:33:00.218184   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetSSHPort
	I0108 21:33:00.220139   47937 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:33:00.220157   47937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:33:00.220158   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:3b:14", ip: ""} in network mk-old-k8s-version-879273: {Iface:virbr3 ExpiryTime:2024-01-08 22:26:52 +0000 UTC Type:0 Mac:52:54:00:d7:3b:14 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:old-k8s-version-879273 Clientid:01:52:54:00:d7:3b:14}
	I0108 21:33:00.220180   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetSSHHostname
	I0108 21:33:00.220192   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | domain old-k8s-version-879273 has defined IP address 192.168.61.130 and MAC address 52:54:00:d7:3b:14 in network mk-old-k8s-version-879273
	I0108 21:33:00.220363   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetSSHKeyPath
	I0108 21:33:00.220550   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetSSHUsername
	I0108 21:33:00.220708   47937 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/old-k8s-version-879273/id_rsa Username:docker}
	I0108 21:33:00.223639   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | domain old-k8s-version-879273 has defined MAC address 52:54:00:d7:3b:14 in network mk-old-k8s-version-879273
	I0108 21:33:00.224322   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:3b:14", ip: ""} in network mk-old-k8s-version-879273: {Iface:virbr3 ExpiryTime:2024-01-08 22:26:52 +0000 UTC Type:0 Mac:52:54:00:d7:3b:14 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:old-k8s-version-879273 Clientid:01:52:54:00:d7:3b:14}
	I0108 21:33:00.224346   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | domain old-k8s-version-879273 has defined IP address 192.168.61.130 and MAC address 52:54:00:d7:3b:14 in network mk-old-k8s-version-879273
	I0108 21:33:00.224612   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetSSHPort
	I0108 21:33:00.224840   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetSSHKeyPath
	I0108 21:33:00.225016   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetSSHUsername
	I0108 21:33:00.225154   47937 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/old-k8s-version-879273/id_rsa Username:docker}
	I0108 21:33:00.236222   47937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0108 21:33:00.236741   47937 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:00.237188   47937 main.go:141] libmachine: Using API Version  1
	I0108 21:33:00.237211   47937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:00.237624   47937 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:00.237787   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetState
	I0108 21:33:00.240004   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .DriverName
	I0108 21:33:00.240247   47937 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:33:00.240263   47937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:33:00.240283   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetSSHHostname
	I0108 21:33:00.243430   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | domain old-k8s-version-879273 has defined MAC address 52:54:00:d7:3b:14 in network mk-old-k8s-version-879273
	I0108 21:33:00.243922   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:3b:14", ip: ""} in network mk-old-k8s-version-879273: {Iface:virbr3 ExpiryTime:2024-01-08 22:26:52 +0000 UTC Type:0 Mac:52:54:00:d7:3b:14 Iaid: IPaddr:192.168.61.130 Prefix:24 Hostname:old-k8s-version-879273 Clientid:01:52:54:00:d7:3b:14}
	I0108 21:33:00.243951   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | domain old-k8s-version-879273 has defined IP address 192.168.61.130 and MAC address 52:54:00:d7:3b:14 in network mk-old-k8s-version-879273
	I0108 21:33:00.244164   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetSSHPort
	I0108 21:33:00.244362   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetSSHKeyPath
	I0108 21:33:00.244531   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .GetSSHUsername
	I0108 21:33:00.244662   47937 sshutil.go:53] new ssh client: &{IP:192.168.61.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/old-k8s-version-879273/id_rsa Username:docker}
	I0108 21:33:00.358091   47937 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:33:00.390414   47937 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:33:00.390444   47937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 21:33:00.391262   47937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:33:00.441895   47937 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:33:00.441926   47937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:33:00.444542   47937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:33:00.502168   47937 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:33:00.502189   47937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:33:00.591503   47937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:33:00.989270   47937 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-879273" context rescaled to 1 replicas
	I0108 21:33:00.989317   47937 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.130 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:33:00.991789   47937 out.go:177] * Verifying Kubernetes components...
	I0108 21:33:00.993849   47937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:33:01.090533   47937 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0108 21:32:59.836449   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:32:57.923563   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:33:00.421515   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:33:02.141026   47937 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.696447435s)
	I0108 21:33:02.141080   47937 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:02.141096   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .Close
	I0108 21:33:02.141234   47937 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.7499451s)
	I0108 21:33:02.141272   47937 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:02.141290   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .Close
	I0108 21:33:02.141523   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | Closing plugin on server side
	I0108 21:33:02.141541   47937 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:02.141571   47937 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:02.141582   47937 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:02.141583   47937 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:02.141595   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .Close
	I0108 21:33:02.141597   47937 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:02.141608   47937 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:02.141620   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .Close
	I0108 21:33:02.141582   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | Closing plugin on server side
	I0108 21:33:02.141934   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | Closing plugin on server side
	I0108 21:33:02.141956   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | Closing plugin on server side
	I0108 21:33:02.141981   47937 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:02.141983   47937 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:02.141990   47937 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:02.141999   47937 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:02.161482   47937 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:02.161510   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .Close
	I0108 21:33:02.161813   47937 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:02.161833   47937 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:02.199746   47937 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.205864353s)
	I0108 21:33:02.199795   47937 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-879273" to be "Ready" ...
	I0108 21:33:02.199749   47937 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.60818397s)
	I0108 21:33:02.199928   47937 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:02.199948   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .Close
	I0108 21:33:02.200264   47937 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:02.200288   47937 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:02.200299   47937 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:02.200309   47937 main.go:141] libmachine: (old-k8s-version-879273) Calling .Close
	I0108 21:33:02.200529   47937 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:02.200544   47937 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:02.200568   47937 main.go:141] libmachine: (old-k8s-version-879273) DBG | Closing plugin on server side
	I0108 21:33:02.200570   47937 addons.go:473] Verifying addon metrics-server=true in "old-k8s-version-879273"
	I0108 21:33:02.202880   47937 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0108 21:33:02.204397   47937 addons.go:508] enable addons completed in 2.034964564s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0108 21:33:02.211388   47937 node_ready.go:49] node "old-k8s-version-879273" has status "Ready":"True"
	I0108 21:33:02.211410   47937 node_ready.go:38] duration metric: took 11.605517ms waiting for node "old-k8s-version-879273" to be "Ready" ...
	I0108 21:33:02.211421   47937 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:33:02.224605   47937 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-mjzz4" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:04.232608   47937 pod_ready.go:102] pod "coredns-5644d7b6d9-mjzz4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:33:06.228921   47937 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-mjzz4" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-mjzz4" not found
	I0108 21:33:06.228948   47937 pod_ready.go:81] duration metric: took 4.004308645s waiting for pod "coredns-5644d7b6d9-mjzz4" in "kube-system" namespace to be "Ready" ...
	E0108 21:33:06.228959   47937 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-mjzz4" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-mjzz4" not found
	I0108 21:33:06.228976   47937 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-mz6r2" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:02.908431   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:33:02.422221   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:33:04.921408   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:33:08.236314   47937 pod_ready.go:102] pod "coredns-5644d7b6d9-mz6r2" in "kube-system" namespace has status "Ready":"False"
	I0108 21:33:08.750671   47937 pod_ready.go:92] pod "coredns-5644d7b6d9-mz6r2" in "kube-system" namespace has status "Ready":"True"
	I0108 21:33:08.750698   47937 pod_ready.go:81] duration metric: took 2.52171506s waiting for pod "coredns-5644d7b6d9-mz6r2" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:08.750711   47937 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lk26t" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:08.758632   47937 pod_ready.go:92] pod "kube-proxy-lk26t" in "kube-system" namespace has status "Ready":"True"
	I0108 21:33:08.758661   47937 pod_ready.go:81] duration metric: took 7.941734ms waiting for pod "kube-proxy-lk26t" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:08.758673   47937 pod_ready.go:38] duration metric: took 6.547240094s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:33:08.758690   47937 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:33:08.758753   47937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:33:08.775694   47937 api_server.go:72] duration metric: took 7.786346276s to wait for apiserver process to appear ...
	I0108 21:33:08.775717   47937 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:33:08.775744   47937 api_server.go:253] Checking apiserver healthz at https://192.168.61.130:8443/healthz ...
	I0108 21:33:08.782811   47937 api_server.go:279] https://192.168.61.130:8443/healthz returned 200:
	ok
	I0108 21:33:08.783721   47937 api_server.go:141] control plane version: v1.16.0
	I0108 21:33:08.783742   47937 api_server.go:131] duration metric: took 8.018323ms to wait for apiserver health ...
	I0108 21:33:08.783751   47937 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:33:08.790625   47937 system_pods.go:59] 4 kube-system pods found
	I0108 21:33:08.790657   47937 system_pods.go:61] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:08.790665   47937 system_pods.go:61] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:08.790676   47937 system_pods.go:61] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:08.790684   47937 system_pods.go:61] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:08.790695   47937 system_pods.go:74] duration metric: took 6.936956ms to wait for pod list to return data ...
	I0108 21:33:08.790722   47937 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:33:08.796595   47937 default_sa.go:45] found service account: "default"
	I0108 21:33:08.796622   47937 default_sa.go:55] duration metric: took 5.893821ms for default service account to be created ...
	I0108 21:33:08.796631   47937 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:33:08.801560   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:08.801589   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:08.801600   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:08.801611   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:08.801618   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:08.801639   47937 retry.go:31] will retry after 274.409668ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:09.083697   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:09.083727   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:09.083734   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:09.083743   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:09.083749   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:09.083771   47937 retry.go:31] will retry after 272.685481ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:09.361923   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:09.361960   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:09.361968   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:09.361976   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:09.361981   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:09.361998   47937 retry.go:31] will retry after 471.299078ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:09.839427   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:09.839479   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:09.839488   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:09.839498   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:09.839504   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:09.839526   47937 retry.go:31] will retry after 506.066ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:10.350323   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:10.350349   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:10.350354   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:10.350361   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:10.350365   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:10.350381   47937 retry.go:31] will retry after 699.123229ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:11.054230   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:11.054252   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:11.054257   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:11.054263   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:11.054268   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:11.054284   47937 retry.go:31] will retry after 706.567154ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:08.992380   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:33:07.418208   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace has status "Ready":"False"
	I0108 21:33:08.911650   49554 pod_ready.go:81] duration metric: took 4m0.000827147s waiting for pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace to be "Ready" ...
	E0108 21:33:08.911681   49554 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5fmgh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0108 21:33:08.911704   49554 pod_ready.go:38] duration metric: took 4m11.051949197s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:33:08.911734   49554 kubeadm.go:640] restartCluster took 4m31.256984825s
	W0108 21:33:08.911816   49554 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0108 21:33:08.911852   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0108 21:33:11.766351   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:11.766382   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:11.766387   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:11.766395   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:11.766400   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:11.766416   47937 retry.go:31] will retry after 736.075959ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:12.507010   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:12.507036   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:12.507042   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:12.507049   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:12.507055   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:12.507074   47937 retry.go:31] will retry after 1.260590002s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:13.773104   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:13.773129   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:13.773134   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:13.773141   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:13.773145   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:13.773165   47937 retry.go:31] will retry after 1.58689405s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:15.365200   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:15.365236   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:15.365243   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:15.365250   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:15.365255   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:15.365271   47937 retry.go:31] will retry after 2.233977288s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:12.060358   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:33:17.605671   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:17.605727   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:17.605734   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:17.605743   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:17.605748   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:17.605765   47937 retry.go:31] will retry after 1.994986411s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:19.607290   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:19.607331   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:19.607338   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:19.607349   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:19.607356   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:19.607373   47937 retry.go:31] will retry after 3.498149407s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:18.140334   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:33:21.212355   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:33:23.111934   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:23.111966   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:23.111975   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:23.111986   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:23.111994   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:23.112024   47937 retry.go:31] will retry after 4.510324323s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:22.951387   49554 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.039509213s)
	I0108 21:33:22.951452   49554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:33:22.965964   49554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:33:22.976170   49554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:33:22.985878   49554 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:33:22.985927   49554 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 21:33:23.192108   49554 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:33:27.627212   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:27.627239   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:27.627244   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:27.627252   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:27.627257   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:27.627272   47937 retry.go:31] will retry after 3.77258239s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:31.405989   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:31.406016   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:31.406021   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:31.406029   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:31.406033   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:31.406048   47937 retry.go:31] will retry after 5.936863669s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:27.292355   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:33:30.364381   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:33:33.605023   49554 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0108 21:33:33.605084   49554 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 21:33:33.605296   49554 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:33:33.605439   49554 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:33:33.605562   49554 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:33:33.605638   49554 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:33:33.607481   49554 out.go:204]   - Generating certificates and keys ...
	I0108 21:33:33.607585   49554 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 21:33:33.607677   49554 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 21:33:33.607778   49554 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0108 21:33:33.607878   49554 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0108 21:33:33.607977   49554 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0108 21:33:33.608066   49554 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0108 21:33:33.608172   49554 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0108 21:33:33.608278   49554 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0108 21:33:33.608385   49554 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0108 21:33:33.608494   49554 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0108 21:33:33.608535   49554 kubeadm.go:322] [certs] Using the existing "sa" key
	I0108 21:33:33.608589   49554 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:33:33.608640   49554 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:33:33.608690   49554 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0108 21:33:33.608733   49554 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:33:33.608785   49554 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:33:33.608859   49554 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:33:33.608976   49554 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:33:33.609072   49554 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:33:33.611039   49554 out.go:204]   - Booting up control plane ...
	I0108 21:33:33.611147   49554 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:33:33.611214   49554 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:33:33.611273   49554 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:33:33.611389   49554 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:33:33.611528   49554 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:33:33.611597   49554 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 21:33:33.611798   49554 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:33:33.611909   49554 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503153 seconds
	I0108 21:33:33.612015   49554 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:33:33.612168   49554 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:33:33.612260   49554 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:33:33.612409   49554 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-420119 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:33:33.612488   49554 kubeadm.go:322] [bootstrap-token] Using token: pnawfr.88s1ud77c5647113
	I0108 21:33:33.614112   49554 out.go:204]   - Configuring RBAC rules ...
	I0108 21:33:33.614241   49554 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:33:33.614354   49554 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:33:33.614559   49554 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:33:33.614738   49554 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:33:33.614884   49554 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:33:33.615009   49554 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:33:33.615186   49554 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:33:33.615278   49554 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 21:33:33.615345   49554 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 21:33:33.615359   49554 kubeadm.go:322] 
	I0108 21:33:33.615430   49554 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 21:33:33.615445   49554 kubeadm.go:322] 
	I0108 21:33:33.615563   49554 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 21:33:33.615575   49554 kubeadm.go:322] 
	I0108 21:33:33.615606   49554 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 21:33:33.615683   49554 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:33:33.615754   49554 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:33:33.615763   49554 kubeadm.go:322] 
	I0108 21:33:33.615822   49554 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 21:33:33.615833   49554 kubeadm.go:322] 
	I0108 21:33:33.615895   49554 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:33:33.615905   49554 kubeadm.go:322] 
	I0108 21:33:33.615969   49554 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 21:33:33.616075   49554 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:33:33.616155   49554 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:33:33.616163   49554 kubeadm.go:322] 
	I0108 21:33:33.616268   49554 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:33:33.616368   49554 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 21:33:33.616378   49554 kubeadm.go:322] 
	I0108 21:33:33.616485   49554 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pnawfr.88s1ud77c5647113 \
	I0108 21:33:33.616613   49554 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 \
	I0108 21:33:33.616647   49554 kubeadm.go:322] 	--control-plane 
	I0108 21:33:33.616656   49554 kubeadm.go:322] 
	I0108 21:33:33.616768   49554 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:33:33.616783   49554 kubeadm.go:322] 
	I0108 21:33:33.616849   49554 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pnawfr.88s1ud77c5647113 \
	I0108 21:33:33.616946   49554 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c8c1be52030936a70632c8042b36c1f0572b8047d898b1d332e0bb01536ba717 
	I0108 21:33:33.616972   49554 cni.go:84] Creating CNI manager for ""
	I0108 21:33:33.616984   49554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:33:33.618729   49554 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 21:33:36.444382   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:33:33.620132   49554 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 21:33:33.650873   49554 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 21:33:33.709598   49554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:33:33.709656   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:33.709692   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28 minikube.k8s.io/name=no-preload-420119 minikube.k8s.io/updated_at=2024_01_08T21_33_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:34.199464   49554 ops.go:34] apiserver oom_adj: -16
	I0108 21:33:34.199684   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:34.699985   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:35.200687   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:35.699880   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:36.199898   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:36.700174   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:37.348618   47937 system_pods.go:86] 4 kube-system pods found
	I0108 21:33:37.348648   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:37.348656   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:37.348666   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:37.348674   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:37.348693   47937 retry.go:31] will retry after 6.367715652s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:39.516336   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:33:37.200056   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:37.699827   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:38.200654   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:38.700613   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:39.199935   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:39.700633   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:40.200433   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:40.700219   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:41.199834   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:41.700035   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:42.200187   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:42.700319   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:43.199899   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:43.700385   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:44.199822   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:44.700203   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:45.199979   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:45.700677   49554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:33:45.812913   49554 kubeadm.go:1088] duration metric: took 12.103309678s to wait for elevateKubeSystemPrivileges.
	I0108 21:33:45.812951   49554 kubeadm.go:406] StartCluster complete in 5m8.21025389s
	I0108 21:33:45.812975   49554 settings.go:142] acquiring lock: {Name:mk91d3baf51872e4bb0758b94fca7c7249bb9666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:33:45.813058   49554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:33:45.814711   49554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/kubeconfig: {Name:mkeb2e8a20e31c0c2d5c7e8214a27af3141300ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:33:45.815009   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:33:45.815043   49554 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:33:45.815135   49554 addons.go:69] Setting storage-provisioner=true in profile "no-preload-420119"
	I0108 21:33:45.815141   49554 addons.go:69] Setting default-storageclass=true in profile "no-preload-420119"
	I0108 21:33:45.815157   49554 addons.go:237] Setting addon storage-provisioner=true in "no-preload-420119"
	I0108 21:33:45.815156   49554 addons.go:69] Setting metrics-server=true in profile "no-preload-420119"
	W0108 21:33:45.815166   49554 addons.go:246] addon storage-provisioner should already be in state true
	I0108 21:33:45.815183   49554 addons.go:237] Setting addon metrics-server=true in "no-preload-420119"
	W0108 21:33:45.815196   49554 addons.go:246] addon metrics-server should already be in state true
	I0108 21:33:45.815224   49554 host.go:66] Checking if "no-preload-420119" exists ...
	I0108 21:33:45.815241   49554 config.go:182] Loaded profile config "no-preload-420119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 21:33:45.815247   49554 host.go:66] Checking if "no-preload-420119" exists ...
	I0108 21:33:45.815160   49554 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-420119"
	I0108 21:33:45.815633   49554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:45.815634   49554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:45.815686   49554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:45.815706   49554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:45.815763   49554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:45.815784   49554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:45.831843   49554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32813
	I0108 21:33:45.831867   49554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37119
	I0108 21:33:45.832312   49554 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:45.832337   49554 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:45.832847   49554 main.go:141] libmachine: Using API Version  1
	I0108 21:33:45.832873   49554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:45.833013   49554 main.go:141] libmachine: Using API Version  1
	I0108 21:33:45.833039   49554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:45.833440   49554 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:45.833441   49554 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:45.833665   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetState
	I0108 21:33:45.833921   49554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38763
	I0108 21:33:45.834058   49554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:45.834099   49554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:45.834402   49554 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:45.834970   49554 main.go:141] libmachine: Using API Version  1
	I0108 21:33:45.834994   49554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:45.835382   49554 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:45.835870   49554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:45.835953   49554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:45.837299   49554 addons.go:237] Setting addon default-storageclass=true in "no-preload-420119"
	W0108 21:33:45.837315   49554 addons.go:246] addon default-storageclass should already be in state true
	I0108 21:33:45.837334   49554 host.go:66] Checking if "no-preload-420119" exists ...
	I0108 21:33:45.837576   49554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:45.837610   49554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:45.852615   49554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39361
	I0108 21:33:45.853134   49554 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:45.853703   49554 main.go:141] libmachine: Using API Version  1
	I0108 21:33:45.853752   49554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:45.854112   49554 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:45.854364   49554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42797
	I0108 21:33:45.854760   49554 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:45.854796   49554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:33:45.854831   49554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:33:45.855368   49554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I0108 21:33:45.855422   49554 main.go:141] libmachine: Using API Version  1
	I0108 21:33:45.855440   49554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:45.855766   49554 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:45.856007   49554 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:45.856259   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetState
	I0108 21:33:45.856360   49554 main.go:141] libmachine: Using API Version  1
	I0108 21:33:45.856388   49554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:45.856680   49554 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:45.856852   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetState
	I0108 21:33:45.858475   49554 main.go:141] libmachine: (no-preload-420119) Calling .DriverName
	I0108 21:33:45.858769   49554 main.go:141] libmachine: (no-preload-420119) Calling .DriverName
	I0108 21:33:45.860924   49554 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 21:33:45.862652   49554 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:33:45.862702   49554 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:33:45.864391   49554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:33:45.864418   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetSSHHostname
	I0108 21:33:45.864364   49554 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:33:45.864499   49554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:33:45.864511   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetSSHHostname
	I0108 21:33:45.868342   49554 main.go:141] libmachine: (no-preload-420119) DBG | domain no-preload-420119 has defined MAC address 52:54:00:a2:1b:91 in network mk-no-preload-420119
	I0108 21:33:45.868566   49554 main.go:141] libmachine: (no-preload-420119) DBG | domain no-preload-420119 has defined MAC address 52:54:00:a2:1b:91 in network mk-no-preload-420119
	I0108 21:33:45.868907   49554 main.go:141] libmachine: (no-preload-420119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1b:91", ip: ""} in network mk-no-preload-420119: {Iface:virbr1 ExpiryTime:2024-01-08 22:17:09 +0000 UTC Type:0 Mac:52:54:00:a2:1b:91 Iaid: IPaddr:192.168.83.226 Prefix:24 Hostname:no-preload-420119 Clientid:01:52:54:00:a2:1b:91}
	I0108 21:33:45.868935   49554 main.go:141] libmachine: (no-preload-420119) DBG | domain no-preload-420119 has defined IP address 192.168.83.226 and MAC address 52:54:00:a2:1b:91 in network mk-no-preload-420119
	I0108 21:33:45.868969   49554 main.go:141] libmachine: (no-preload-420119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1b:91", ip: ""} in network mk-no-preload-420119: {Iface:virbr1 ExpiryTime:2024-01-08 22:17:09 +0000 UTC Type:0 Mac:52:54:00:a2:1b:91 Iaid: IPaddr:192.168.83.226 Prefix:24 Hostname:no-preload-420119 Clientid:01:52:54:00:a2:1b:91}
	I0108 21:33:45.868988   49554 main.go:141] libmachine: (no-preload-420119) DBG | domain no-preload-420119 has defined IP address 192.168.83.226 and MAC address 52:54:00:a2:1b:91 in network mk-no-preload-420119
	I0108 21:33:45.869250   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetSSHPort
	I0108 21:33:45.869315   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetSSHPort
	I0108 21:33:45.869438   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetSSHKeyPath
	I0108 21:33:45.869492   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetSSHKeyPath
	I0108 21:33:45.869560   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetSSHUsername
	I0108 21:33:45.869654   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetSSHUsername
	I0108 21:33:45.869709   49554 sshutil.go:53] new ssh client: &{IP:192.168.83.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/no-preload-420119/id_rsa Username:docker}
	I0108 21:33:45.870075   49554 sshutil.go:53] new ssh client: &{IP:192.168.83.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/no-preload-420119/id_rsa Username:docker}
	I0108 21:33:45.872920   49554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0108 21:33:45.873276   49554 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:33:45.873849   49554 main.go:141] libmachine: Using API Version  1
	I0108 21:33:45.873873   49554 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:33:45.874278   49554 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:33:45.874466   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetState
	I0108 21:33:45.876189   49554 main.go:141] libmachine: (no-preload-420119) Calling .DriverName
	I0108 21:33:45.876468   49554 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:33:45.876487   49554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:33:45.876504   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetSSHHostname
	I0108 21:33:45.879174   49554 main.go:141] libmachine: (no-preload-420119) DBG | domain no-preload-420119 has defined MAC address 52:54:00:a2:1b:91 in network mk-no-preload-420119
	I0108 21:33:45.879463   49554 main.go:141] libmachine: (no-preload-420119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:1b:91", ip: ""} in network mk-no-preload-420119: {Iface:virbr1 ExpiryTime:2024-01-08 22:17:09 +0000 UTC Type:0 Mac:52:54:00:a2:1b:91 Iaid: IPaddr:192.168.83.226 Prefix:24 Hostname:no-preload-420119 Clientid:01:52:54:00:a2:1b:91}
	I0108 21:33:45.879487   49554 main.go:141] libmachine: (no-preload-420119) DBG | domain no-preload-420119 has defined IP address 192.168.83.226 and MAC address 52:54:00:a2:1b:91 in network mk-no-preload-420119
	I0108 21:33:45.879666   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetSSHPort
	I0108 21:33:45.879938   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetSSHKeyPath
	I0108 21:33:45.880083   49554 main.go:141] libmachine: (no-preload-420119) Calling .GetSSHUsername
	I0108 21:33:45.880263   49554 sshutil.go:53] new ssh client: &{IP:192.168.83.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/no-preload-420119/id_rsa Username:docker}
	I0108 21:33:46.003208   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 21:33:46.019326   49554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:33:46.050825   49554 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:33:46.050856   49554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 21:33:46.071674   49554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:33:46.144657   49554 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:33:46.144694   49554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:33:46.231478   49554 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:33:46.231505   49554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:33:46.340047   49554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:33:46.352773   49554 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-420119" context rescaled to 1 replicas
	I0108 21:33:46.352823   49554 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.226 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:33:46.355282   49554 out.go:177] * Verifying Kubernetes components...
	I0108 21:33:43.721604   47937 system_pods.go:86] 5 kube-system pods found
	I0108 21:33:43.721641   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:43.721650   47937 system_pods.go:89] "etcd-old-k8s-version-879273" [423e3dd3-c872-4192-8d04-2c911abb7673] Pending
	I0108 21:33:43.721656   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:43.721666   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:43.721673   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:43.721696   47937 retry.go:31] will retry after 10.652592146s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0108 21:33:45.600415   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:33:46.357406   49554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:33:46.899689   49554 start.go:929] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I0108 21:33:46.899786   49554 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:46.899812   49554 main.go:141] libmachine: (no-preload-420119) Calling .Close
	I0108 21:33:46.900127   49554 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:46.900150   49554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:46.900160   49554 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:46.900168   49554 main.go:141] libmachine: (no-preload-420119) Calling .Close
	I0108 21:33:46.900420   49554 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:46.900442   49554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:46.936990   49554 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:46.937021   49554 main.go:141] libmachine: (no-preload-420119) Calling .Close
	I0108 21:33:46.937363   49554 main.go:141] libmachine: (no-preload-420119) DBG | Closing plugin on server side
	I0108 21:33:46.937420   49554 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:46.937434   49554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:47.164681   49554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.092963666s)
	I0108 21:33:47.164752   49554 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:47.164766   49554 main.go:141] libmachine: (no-preload-420119) Calling .Close
	I0108 21:33:47.165231   49554 main.go:141] libmachine: (no-preload-420119) DBG | Closing plugin on server side
	I0108 21:33:47.165234   49554 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:47.165263   49554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:47.165278   49554 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:47.165288   49554 main.go:141] libmachine: (no-preload-420119) Calling .Close
	I0108 21:33:47.165652   49554 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:47.165719   49554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:47.165680   49554 main.go:141] libmachine: (no-preload-420119) DBG | Closing plugin on server side
	I0108 21:33:47.494235   49554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.154136956s)
	I0108 21:33:47.494265   49554 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.136818933s)
	I0108 21:33:47.494307   49554 node_ready.go:35] waiting up to 6m0s for node "no-preload-420119" to be "Ready" ...
	I0108 21:33:47.494312   49554 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:47.494326   49554 main.go:141] libmachine: (no-preload-420119) Calling .Close
	I0108 21:33:47.494713   49554 main.go:141] libmachine: (no-preload-420119) DBG | Closing plugin on server side
	I0108 21:33:47.494752   49554 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:47.494773   49554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:47.494792   49554 main.go:141] libmachine: Making call to close driver server
	I0108 21:33:47.494801   49554 main.go:141] libmachine: (no-preload-420119) Calling .Close
	I0108 21:33:47.495054   49554 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:33:47.495103   49554 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:33:47.495088   49554 main.go:141] libmachine: (no-preload-420119) DBG | Closing plugin on server side
	I0108 21:33:47.495118   49554 addons.go:473] Verifying addon metrics-server=true in "no-preload-420119"
	I0108 21:33:47.497316   49554 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0108 21:33:48.668377   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:33:47.499057   49554 addons.go:508] enable addons completed in 1.684027462s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0108 21:33:47.516551   49554 node_ready.go:49] node "no-preload-420119" has status "Ready":"True"
	I0108 21:33:47.516591   49554 node_ready.go:38] duration metric: took 22.271023ms waiting for node "no-preload-420119" to be "Ready" ...
	I0108 21:33:47.516607   49554 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:33:47.524877   49554 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5jpjt" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:49.036990   49554 pod_ready.go:92] pod "coredns-76f75df574-5jpjt" in "kube-system" namespace has status "Ready":"True"
	I0108 21:33:49.037014   49554 pod_ready.go:81] duration metric: took 1.512095077s waiting for pod "coredns-76f75df574-5jpjt" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:49.037023   49554 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-qrds6" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:49.040903   49554 pod_ready.go:97] error getting pod "coredns-76f75df574-qrds6" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-qrds6" not found
	I0108 21:33:49.040936   49554 pod_ready.go:81] duration metric: took 3.903549ms waiting for pod "coredns-76f75df574-qrds6" in "kube-system" namespace to be "Ready" ...
	E0108 21:33:49.040949   49554 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-qrds6" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-qrds6" not found
	I0108 21:33:49.040958   49554 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-420119" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:49.060814   49554 pod_ready.go:92] pod "etcd-no-preload-420119" in "kube-system" namespace has status "Ready":"True"
	I0108 21:33:49.060837   49554 pod_ready.go:81] duration metric: took 19.871085ms waiting for pod "etcd-no-preload-420119" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:49.060848   49554 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-420119" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:49.078264   49554 pod_ready.go:92] pod "kube-apiserver-no-preload-420119" in "kube-system" namespace has status "Ready":"True"
	I0108 21:33:49.078294   49554 pod_ready.go:81] duration metric: took 17.438508ms waiting for pod "kube-apiserver-no-preload-420119" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:49.078308   49554 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-420119" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:49.090019   49554 pod_ready.go:92] pod "kube-controller-manager-no-preload-420119" in "kube-system" namespace has status "Ready":"True"
	I0108 21:33:49.090052   49554 pod_ready.go:81] duration metric: took 11.735737ms waiting for pod "kube-controller-manager-no-preload-420119" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:49.090067   49554 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pxmhr" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:49.299217   49554 pod_ready.go:92] pod "kube-proxy-pxmhr" in "kube-system" namespace has status "Ready":"True"
	I0108 21:33:49.299253   49554 pod_ready.go:81] duration metric: took 209.177102ms waiting for pod "kube-proxy-pxmhr" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:49.299267   49554 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-420119" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:49.698381   49554 pod_ready.go:92] pod "kube-scheduler-no-preload-420119" in "kube-system" namespace has status "Ready":"True"
	I0108 21:33:49.698405   49554 pod_ready.go:81] duration metric: took 399.13032ms waiting for pod "kube-scheduler-no-preload-420119" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:49.698415   49554 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace to be "Ready" ...
	I0108 21:33:51.707085   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:33:54.380972   47937 system_pods.go:86] 6 kube-system pods found
	I0108 21:33:54.380999   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:33:54.381004   47937 system_pods.go:89] "etcd-old-k8s-version-879273" [423e3dd3-c872-4192-8d04-2c911abb7673] Running
	I0108 21:33:54.381009   47937 system_pods.go:89] "kube-apiserver-old-k8s-version-879273" [2c51a63b-0cc6-4220-a04c-8624bb8cade4] Running
	I0108 21:33:54.381014   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:33:54.381021   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:33:54.381025   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:33:54.381039   47937 retry.go:31] will retry after 12.157376788s: missing components: kube-controller-manager, kube-scheduler
	I0108 21:33:54.748380   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:33:54.207023   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:33:56.707230   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:33:57.820398   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:33:59.206427   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:01.206777   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:06.545165   47937 system_pods.go:86] 8 kube-system pods found
	I0108 21:34:06.545193   47937 system_pods.go:89] "coredns-5644d7b6d9-mz6r2" [af44b760-04e8-461b-9bd7-36bf0c631744] Running
	I0108 21:34:06.545198   47937 system_pods.go:89] "etcd-old-k8s-version-879273" [423e3dd3-c872-4192-8d04-2c911abb7673] Running
	I0108 21:34:06.545203   47937 system_pods.go:89] "kube-apiserver-old-k8s-version-879273" [2c51a63b-0cc6-4220-a04c-8624bb8cade4] Running
	I0108 21:34:06.545207   47937 system_pods.go:89] "kube-controller-manager-old-k8s-version-879273" [9d210a53-3c5c-47e9-bf6c-4f8cc07028da] Running
	I0108 21:34:06.545211   47937 system_pods.go:89] "kube-proxy-lk26t" [6fd54061-1f29-4beb-9d69-fa6b747e4946] Running
	I0108 21:34:06.545215   47937 system_pods.go:89] "kube-scheduler-old-k8s-version-879273" [a08e727e-847b-4608-ac13-9bb50a2b1b11] Running
	I0108 21:34:06.545222   47937 system_pods.go:89] "metrics-server-74d5856cc6-fckkc" [32c88827-5a4d-47f7-8484-bce82bfafdc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:34:06.545227   47937 system_pods.go:89] "storage-provisioner" [a262224e-beec-4c9a-ab5e-4d8b5b5a84b5] Running
	I0108 21:34:06.545235   47937 system_pods.go:126] duration metric: took 57.74859912s to wait for k8s-apps to be running ...
	I0108 21:34:06.545241   47937 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:34:06.545284   47937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:34:06.562502   47937 system_svc.go:56] duration metric: took 17.249988ms WaitForService to wait for kubelet.
	I0108 21:34:06.562538   47937 kubeadm.go:581] duration metric: took 1m5.57319547s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:34:06.562562   47937 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:34:06.566322   47937 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:34:06.566348   47937 node_conditions.go:123] node cpu capacity is 2
	I0108 21:34:06.566360   47937 node_conditions.go:105] duration metric: took 3.791668ms to run NodePressure ...
	I0108 21:34:06.566373   47937 start.go:228] waiting for startup goroutines ...
	I0108 21:34:06.566381   47937 start.go:233] waiting for cluster config update ...
	I0108 21:34:06.566398   47937 start.go:242] writing updated cluster config ...
	I0108 21:34:06.566670   47937 ssh_runner.go:195] Run: rm -f paused
	I0108 21:34:06.614936   47937 start.go:600] kubectl: 1.29.0, cluster: 1.16.0 (minor skew: 13)
	I0108 21:34:06.617135   47937 out.go:177] 
	W0108 21:34:06.618565   47937 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.16.0.
	I0108 21:34:06.620154   47937 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0108 21:34:06.621683   47937 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-879273" cluster and "default" namespace by default
	I0108 21:34:03.904332   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:34:03.209047   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:05.706054   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:06.972345   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:34:07.706115   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:09.706187   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:11.706573   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:13.052356   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:34:16.124297   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:34:14.205999   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:16.206243   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:18.706458   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:20.707949   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:22.204304   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:34:25.276381   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:34:23.207679   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:25.707236   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:31.356330   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:34:28.206470   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:30.706398   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:34.428396   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:34:33.208691   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:35.708676   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:40.508360   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:34:37.708803   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:40.205842   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:43.580416   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:34:42.205959   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:44.206019   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:46.706910   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:49.660371   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:34:49.206618   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:51.706787   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:52.732390   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:34:53.714304   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:56.207624   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:34:58.812452   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:34:58.706139   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:00.706249   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:01.884429   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:35:02.706310   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:04.707033   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:07.964347   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:35:11.036331   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:35:07.206329   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:09.207732   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:11.705587   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:13.705770   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:15.707361   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:17.116317   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:35:20.188315   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:35:18.207708   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:20.706658   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:26.268450   52240 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.142:22: connect: no route to host
	I0108 21:35:22.707071   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:25.205219   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:29.271907   52569 start.go:369] acquired machines lock for "default-k8s-diff-port-690577" in 3m34.93189764s
	I0108 21:35:29.271967   52569 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:35:29.271976   52569 fix.go:54] fixHost starting: 
	I0108 21:35:29.272306   52569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:35:29.272338   52569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:35:29.287287   52569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0108 21:35:29.287779   52569 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:35:29.288290   52569 main.go:141] libmachine: Using API Version  1
	I0108 21:35:29.288316   52569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:35:29.288605   52569 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:35:29.288821   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .DriverName
	I0108 21:35:29.288982   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetState
	I0108 21:35:29.290788   52569 fix.go:102] recreateIfNeeded on default-k8s-diff-port-690577: state=Stopped err=<nil>
	I0108 21:35:29.290825   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .DriverName
	W0108 21:35:29.291026   52569 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:35:29.293702   52569 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-690577" ...
	I0108 21:35:29.270176   52240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:35:29.270211   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHHostname
	I0108 21:35:29.271776   52240 machine.go:91] provisioned docker machine in 4m37.373930275s
	I0108 21:35:29.271816   52240 fix.go:56] fixHost completed within 4m37.395802105s
	I0108 21:35:29.271823   52240 start.go:83] releasing machines lock for "embed-certs-930023", held for 4m37.395817172s
	W0108 21:35:29.271846   52240 start.go:694] error starting host: provision: host is not running
	W0108 21:35:29.271953   52240 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0108 21:35:29.271962   52240 start.go:709] Will try again in 5 seconds ...
	I0108 21:35:27.206006   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:29.705836   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:31.707772   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:29.295316   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .Start
	I0108 21:35:29.295509   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Ensuring networks are active...
	I0108 21:35:29.296265   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Ensuring network default is active
	I0108 21:35:29.296584   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Ensuring network mk-default-k8s-diff-port-690577 is active
	I0108 21:35:29.296940   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Getting domain xml...
	I0108 21:35:29.297545   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Creating domain...
	I0108 21:35:30.597761   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting to get IP...
	I0108 21:35:30.598763   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:30.599162   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | unable to find current IP address of domain default-k8s-diff-port-690577 in network mk-default-k8s-diff-port-690577
	I0108 21:35:30.599243   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | I0108 21:35:30.599141   53277 retry.go:31] will retry after 242.082962ms: waiting for machine to come up
	I0108 21:35:30.842833   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:30.843360   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | unable to find current IP address of domain default-k8s-diff-port-690577 in network mk-default-k8s-diff-port-690577
	I0108 21:35:30.843396   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | I0108 21:35:30.843311   53277 retry.go:31] will retry after 352.832473ms: waiting for machine to come up
	I0108 21:35:31.198157   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:31.198687   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | unable to find current IP address of domain default-k8s-diff-port-690577 in network mk-default-k8s-diff-port-690577
	I0108 21:35:31.198720   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | I0108 21:35:31.198651   53277 retry.go:31] will retry after 417.037034ms: waiting for machine to come up
	I0108 21:35:31.617254   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:31.617768   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | unable to find current IP address of domain default-k8s-diff-port-690577 in network mk-default-k8s-diff-port-690577
	I0108 21:35:31.617801   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | I0108 21:35:31.617718   53277 retry.go:31] will retry after 563.653404ms: waiting for machine to come up
	I0108 21:35:32.183396   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:32.183827   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | unable to find current IP address of domain default-k8s-diff-port-690577 in network mk-default-k8s-diff-port-690577
	I0108 21:35:32.183849   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | I0108 21:35:32.183788   53277 retry.go:31] will retry after 728.898472ms: waiting for machine to come up
	I0108 21:35:32.914479   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:32.914999   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | unable to find current IP address of domain default-k8s-diff-port-690577 in network mk-default-k8s-diff-port-690577
	I0108 21:35:32.915031   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | I0108 21:35:32.914938   53277 retry.go:31] will retry after 923.220272ms: waiting for machine to come up
	I0108 21:35:33.839888   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:33.840518   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | unable to find current IP address of domain default-k8s-diff-port-690577 in network mk-default-k8s-diff-port-690577
	I0108 21:35:33.840564   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | I0108 21:35:33.840477   53277 retry.go:31] will retry after 1.069656104s: waiting for machine to come up
	I0108 21:35:34.273122   52240 start.go:365] acquiring machines lock for embed-certs-930023: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:35:33.712055   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:36.206215   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:34.911674   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:34.912204   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | unable to find current IP address of domain default-k8s-diff-port-690577 in network mk-default-k8s-diff-port-690577
	I0108 21:35:34.912235   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | I0108 21:35:34.912168   53277 retry.go:31] will retry after 1.02039218s: waiting for machine to come up
	I0108 21:35:35.934283   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:35.934754   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | unable to find current IP address of domain default-k8s-diff-port-690577 in network mk-default-k8s-diff-port-690577
	I0108 21:35:35.934780   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | I0108 21:35:35.934674   53277 retry.go:31] will retry after 1.244420822s: waiting for machine to come up
	I0108 21:35:37.180430   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:37.181114   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | unable to find current IP address of domain default-k8s-diff-port-690577 in network mk-default-k8s-diff-port-690577
	I0108 21:35:37.181143   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | I0108 21:35:37.181048   53277 retry.go:31] will retry after 2.194490285s: waiting for machine to come up
	I0108 21:35:38.208207   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:40.708501   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:39.377237   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:39.377788   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | unable to find current IP address of domain default-k8s-diff-port-690577 in network mk-default-k8s-diff-port-690577
	I0108 21:35:39.377820   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | I0108 21:35:39.377727   53277 retry.go:31] will retry after 2.408621382s: waiting for machine to come up
	I0108 21:35:41.789289   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:41.789770   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | unable to find current IP address of domain default-k8s-diff-port-690577 in network mk-default-k8s-diff-port-690577
	I0108 21:35:41.789799   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | I0108 21:35:41.789724   53277 retry.go:31] will retry after 2.54584594s: waiting for machine to come up
	I0108 21:35:43.207264   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:45.706307   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:44.337111   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:44.337577   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | unable to find current IP address of domain default-k8s-diff-port-690577 in network mk-default-k8s-diff-port-690577
	I0108 21:35:44.337610   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | I0108 21:35:44.337523   53277 retry.go:31] will retry after 4.134233519s: waiting for machine to come up
	I0108 21:35:48.473204   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.473723   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Found IP for machine: 192.168.50.165
	I0108 21:35:48.473746   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has current primary IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.473756   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Reserving static IP address...
	I0108 21:35:48.474166   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Reserved static IP address: 192.168.50.165
	I0108 21:35:48.474193   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Waiting for SSH to be available...
	I0108 21:35:48.474220   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-690577", mac: "52:54:00:b5:45:26", ip: "192.168.50.165"} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:48.474264   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | skip adding static IP to network mk-default-k8s-diff-port-690577 - found existing host DHCP lease matching {name: "default-k8s-diff-port-690577", mac: "52:54:00:b5:45:26", ip: "192.168.50.165"}
	I0108 21:35:48.474283   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | Getting to WaitForSSH function...
	I0108 21:35:48.476529   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.476890   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:48.476932   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.477070   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | Using SSH client type: external
	I0108 21:35:48.477102   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | Using SSH private key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/default-k8s-diff-port-690577/id_rsa (-rw-------)
	I0108 21:35:48.477129   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17907-10702/.minikube/machines/default-k8s-diff-port-690577/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 21:35:48.477147   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | About to run SSH command:
	I0108 21:35:48.477160   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | exit 0
	I0108 21:35:48.564169   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | SSH cmd err, output: <nil>: 
	I0108 21:35:48.564566   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetConfigRaw
	I0108 21:35:48.565181   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetIP
	I0108 21:35:48.567750   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.568242   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:48.568269   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.568557   52569 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/config.json ...
	I0108 21:35:48.568742   52569 machine.go:88] provisioning docker machine ...
	I0108 21:35:48.568762   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .DriverName
	I0108 21:35:48.568993   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetMachineName
	I0108 21:35:48.569135   52569 buildroot.go:166] provisioning hostname "default-k8s-diff-port-690577"
	I0108 21:35:48.569164   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetMachineName
	I0108 21:35:48.569283   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHHostname
	I0108 21:35:48.571523   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.571842   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:48.571872   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.571980   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHPort
	I0108 21:35:48.572168   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:35:48.572337   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:35:48.572476   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHUsername
	I0108 21:35:48.572626   52569 main.go:141] libmachine: Using SSH client type: native
	I0108 21:35:48.572991   52569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0108 21:35:48.573008   52569 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-690577 && echo "default-k8s-diff-port-690577" | sudo tee /etc/hostname
	I0108 21:35:48.701585   52569 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-690577
	
	I0108 21:35:48.701618   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHHostname
	I0108 21:35:48.704919   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.705313   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:48.705348   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.705487   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHPort
	I0108 21:35:48.705716   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:35:48.705885   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:35:48.706063   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHUsername
	I0108 21:35:48.706227   52569 main.go:141] libmachine: Using SSH client type: native
	I0108 21:35:48.706535   52569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0108 21:35:48.706554   52569 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-690577' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-690577/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-690577' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:35:48.829397   52569 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:35:48.829429   52569 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17907-10702/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-10702/.minikube}
	I0108 21:35:48.829470   52569 buildroot.go:174] setting up certificates
	I0108 21:35:48.829487   52569 provision.go:83] configureAuth start
	I0108 21:35:48.829516   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetMachineName
	I0108 21:35:48.829833   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetIP
	I0108 21:35:48.832325   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.832720   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:48.832755   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.832867   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHHostname
	I0108 21:35:48.835196   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.835492   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:48.835519   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.835670   52569 provision.go:138] copyHostCerts
	I0108 21:35:48.835739   52569 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem, removing ...
	I0108 21:35:48.835761   52569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 21:35:48.835843   52569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem (1082 bytes)
	I0108 21:35:48.835964   52569 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem, removing ...
	I0108 21:35:48.835976   52569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 21:35:48.836013   52569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem (1123 bytes)
	I0108 21:35:48.836110   52569 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem, removing ...
	I0108 21:35:48.836125   52569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 21:35:48.836162   52569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem (1675 bytes)
	I0108 21:35:48.836236   52569 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-690577 san=[192.168.50.165 192.168.50.165 localhost 127.0.0.1 minikube default-k8s-diff-port-690577]
	I0108 21:35:48.942962   52569 provision.go:172] copyRemoteCerts
	I0108 21:35:48.943014   52569 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:35:48.943035   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHHostname
	I0108 21:35:48.945864   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.946169   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:48.946203   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:48.946374   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHPort
	I0108 21:35:48.946555   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:35:48.946698   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHUsername
	I0108 21:35:48.946878   52569 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/default-k8s-diff-port-690577/id_rsa Username:docker}
	I0108 21:35:49.033990   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 21:35:49.058539   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0108 21:35:49.082471   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:35:49.107164   52569 provision.go:86] duration metric: configureAuth took 277.649691ms
	I0108 21:35:49.107205   52569 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:35:49.107416   52569 config.go:182] Loaded profile config "default-k8s-diff-port-690577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:35:49.107515   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHHostname
	I0108 21:35:49.110472   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:49.110911   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:49.110941   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:49.111203   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHPort
	I0108 21:35:49.111415   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:35:49.111587   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:35:49.111713   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHUsername
	I0108 21:35:49.111918   52569 main.go:141] libmachine: Using SSH client type: native
	I0108 21:35:49.112408   52569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0108 21:35:49.112430   52569 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:35:49.669327   52240 start.go:369] acquired machines lock for "embed-certs-930023" in 15.396143609s
	I0108 21:35:49.669379   52240 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:35:49.669390   52240 fix.go:54] fixHost starting: 
	I0108 21:35:49.669795   52240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:35:49.669828   52240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:35:49.686672   52240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43105
	I0108 21:35:49.687159   52240 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:35:49.687749   52240 main.go:141] libmachine: Using API Version  1
	I0108 21:35:49.687773   52240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:35:49.688161   52240 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:35:49.688364   52240 main.go:141] libmachine: (embed-certs-930023) Calling .DriverName
	I0108 21:35:49.688512   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetState
	I0108 21:35:49.690143   52240 fix.go:102] recreateIfNeeded on embed-certs-930023: state=Stopped err=<nil>
	I0108 21:35:49.690167   52240 main.go:141] libmachine: (embed-certs-930023) Calling .DriverName
	W0108 21:35:49.690336   52240 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:35:49.692349   52240 out.go:177] * Restarting existing kvm2 VM for "embed-certs-930023" ...
	I0108 21:35:49.420964   52569 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:35:49.421021   52569 machine.go:91] provisioned docker machine in 852.26474ms
	I0108 21:35:49.421034   52569 start.go:300] post-start starting for "default-k8s-diff-port-690577" (driver="kvm2")
	I0108 21:35:49.421047   52569 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:35:49.421069   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .DriverName
	I0108 21:35:49.421426   52569 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:35:49.421456   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHHostname
	I0108 21:35:49.424676   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:49.425082   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:49.425142   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:49.425296   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHPort
	I0108 21:35:49.425476   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:35:49.425651   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHUsername
	I0108 21:35:49.425784   52569 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/default-k8s-diff-port-690577/id_rsa Username:docker}
	I0108 21:35:49.514086   52569 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:35:49.518283   52569 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:35:49.518313   52569 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/addons for local assets ...
	I0108 21:35:49.518396   52569 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/files for local assets ...
	I0108 21:35:49.518477   52569 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> 178962.pem in /etc/ssl/certs
	I0108 21:35:49.518562   52569 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:35:49.527189   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /etc/ssl/certs/178962.pem (1708 bytes)
	I0108 21:35:49.551535   52569 start.go:303] post-start completed in 130.485168ms
	I0108 21:35:49.551564   52569 fix.go:56] fixHost completed within 20.279586202s
	I0108 21:35:49.551588   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHHostname
	I0108 21:35:49.554212   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:49.554508   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:49.554557   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:49.554701   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHPort
	I0108 21:35:49.554918   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:35:49.555084   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:35:49.555258   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHUsername
	I0108 21:35:49.555449   52569 main.go:141] libmachine: Using SSH client type: native
	I0108 21:35:49.555783   52569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.50.165 22 <nil> <nil>}
	I0108 21:35:49.555795   52569 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 21:35:49.669158   52569 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704749749.647182750
	
	I0108 21:35:49.669186   52569 fix.go:206] guest clock: 1704749749.647182750
	I0108 21:35:49.669198   52569 fix.go:219] Guest: 2024-01-08 21:35:49.64718275 +0000 UTC Remote: 2024-01-08 21:35:49.551568596 +0000 UTC m=+235.371071516 (delta=95.614154ms)
	I0108 21:35:49.669221   52569 fix.go:190] guest clock delta is within tolerance: 95.614154ms
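	The fix.go lines above read the guest clock over SSH (date +%s.%N) and compare it against the host clock, skipping any resync when the delta stays within tolerance. A minimal Go sketch of that comparison using the timestamps from the log; the helper name and the 2s threshold here are assumptions for illustration, not minikube's actual values:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// clockDeltaOK reports the guest-minus-host clock delta and whether it is
// within the given tolerance (so no clock resync would be needed).
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Timestamps taken from the log lines above.
	guest := time.Date(2024, 1, 8, 21, 35, 49, 647182750, time.UTC)
	host := time.Date(2024, 1, 8, 21, 35, 49, 551568596, time.UTC)
	delta, ok := clockDeltaOK(guest, host, 2*time.Second) // tolerance value is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)  // delta=95.614154ms within tolerance=true
}
```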
	I0108 21:35:49.669228   52569 start.go:83] releasing machines lock for "default-k8s-diff-port-690577", held for 20.397279077s
	I0108 21:35:49.669262   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .DriverName
	I0108 21:35:49.669581   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetIP
	I0108 21:35:49.672283   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:49.672695   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:49.672728   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:49.672934   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .DriverName
	I0108 21:35:49.673545   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .DriverName
	I0108 21:35:49.673741   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .DriverName
	I0108 21:35:49.673828   52569 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:35:49.673870   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHHostname
	I0108 21:35:49.673962   52569 ssh_runner.go:195] Run: cat /version.json
	I0108 21:35:49.673982   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHHostname
	I0108 21:35:49.676571   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:49.676765   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:49.676929   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:49.676958   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:49.677132   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHPort
	I0108 21:35:49.677265   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:49.677301   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:49.677352   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:35:49.677440   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHPort
	I0108 21:35:49.677540   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHUsername
	I0108 21:35:49.677601   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:35:49.677675   52569 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/default-k8s-diff-port-690577/id_rsa Username:docker}
	I0108 21:35:49.677727   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHUsername
	I0108 21:35:49.677860   52569 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/default-k8s-diff-port-690577/id_rsa Username:docker}
	I0108 21:35:49.762237   52569 ssh_runner.go:195] Run: systemctl --version
	I0108 21:35:49.790545   52569 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:35:49.935066   52569 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 21:35:49.941367   52569 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:35:49.941444   52569 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:35:49.957199   52569 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 21:35:49.957226   52569 start.go:475] detecting cgroup driver to use...
	I0108 21:35:49.957304   52569 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:35:49.975096   52569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:35:49.987315   52569 docker.go:217] disabling cri-docker service (if available) ...
	I0108 21:35:49.987385   52569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:35:50.000549   52569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:35:50.013927   52569 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:35:50.132488   52569 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:35:50.258588   52569 docker.go:233] disabling docker service ...
	I0108 21:35:50.258656   52569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:35:50.272058   52569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:35:50.283699   52569 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:35:50.401574   52569 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:35:50.517452   52569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:35:50.536379   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:35:50.559616   52569 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:35:50.559678   52569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:35:50.571375   52569 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:35:50.571441   52569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:35:50.582116   52569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:35:50.592058   52569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:35:50.602105   52569 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:35:50.614472   52569 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:35:50.624312   52569 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 21:35:50.624370   52569 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 21:35:50.636742   52569 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:35:50.646442   52569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:35:50.763686   52569 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:35:50.979003   52569 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:35:50.979072   52569 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:35:50.985834   52569 start.go:543] Will wait 60s for crictl version
	I0108 21:35:50.985894   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:35:50.989873   52569 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:35:51.042385   52569 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 21:35:51.042486   52569 ssh_runner.go:195] Run: crio --version
	I0108 21:35:51.099564   52569 ssh_runner.go:195] Run: crio --version
	I0108 21:35:51.166057   52569 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 21:35:49.693842   52240 main.go:141] libmachine: (embed-certs-930023) Calling .Start
	I0108 21:35:49.694029   52240 main.go:141] libmachine: (embed-certs-930023) Ensuring networks are active...
	I0108 21:35:49.694817   52240 main.go:141] libmachine: (embed-certs-930023) Ensuring network default is active
	I0108 21:35:49.695230   52240 main.go:141] libmachine: (embed-certs-930023) Ensuring network mk-embed-certs-930023 is active
	I0108 21:35:49.695711   52240 main.go:141] libmachine: (embed-certs-930023) Getting domain xml...
	I0108 21:35:49.696523   52240 main.go:141] libmachine: (embed-certs-930023) Creating domain...
	I0108 21:35:51.074593   52240 main.go:141] libmachine: (embed-certs-930023) Waiting to get IP...
	I0108 21:35:51.075764   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:35:51.076265   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:35:51.076344   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:35:51.076231   53390 retry.go:31] will retry after 199.763061ms: waiting for machine to come up
	I0108 21:35:51.277996   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:35:51.278591   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:35:51.278626   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:35:51.278542   53390 retry.go:31] will retry after 260.970684ms: waiting for machine to come up
	I0108 21:35:51.541128   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:35:51.541626   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:35:51.541658   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:35:51.541585   53390 retry.go:31] will retry after 476.756015ms: waiting for machine to come up
	I0108 21:35:47.706476   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:49.708011   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:51.709103   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:51.167624   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetIP
	I0108 21:35:51.170820   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:51.171250   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:35:51.171282   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:35:51.171469   52569 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0108 21:35:51.176643   52569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:35:51.190080   52569 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:35:51.190149   52569 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:35:51.229757   52569 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 21:35:51.229840   52569 ssh_runner.go:195] Run: which lz4
	I0108 21:35:51.233911   52569 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 21:35:51.239028   52569 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:35:51.239062   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 21:35:53.253102   52569 crio.go:444] Took 2.019224 seconds to copy over tarball
	I0108 21:35:53.253177   52569 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 21:35:52.020480   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:35:52.021162   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:35:52.021199   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:35:52.021105   53390 retry.go:31] will retry after 578.164512ms: waiting for machine to come up
	I0108 21:35:52.600989   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:35:52.601616   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:35:52.601644   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:35:52.601538   53390 retry.go:31] will retry after 498.23832ms: waiting for machine to come up
	I0108 21:35:53.101233   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:35:53.101715   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:35:53.101748   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:35:53.101669   53390 retry.go:31] will retry after 622.65297ms: waiting for machine to come up
	I0108 21:35:53.726544   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:35:53.727297   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:35:53.727330   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:35:53.727262   53390 retry.go:31] will retry after 765.678501ms: waiting for machine to come up
	I0108 21:35:54.494263   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:35:54.494819   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:35:54.494856   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:35:54.494762   53390 retry.go:31] will retry after 993.082444ms: waiting for machine to come up
	I0108 21:35:55.489112   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:35:55.489656   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:35:55.489684   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:35:55.489607   53390 retry.go:31] will retry after 1.806967718s: waiting for machine to come up
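	The interleaved retry.go lines show the wait-for-IP loop: each failed DHCP-lease lookup schedules another attempt after a growing, jittered delay until the machine comes up or the wait times out. A minimal Go sketch of that retry pattern; lookupIP and the specific delays are placeholders for illustration, not the libmachine driver code:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the real "ask libvirt for the domain's DHCP lease" call.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a growing, jittered backoff until it succeeds
// or the deadline passes, mirroring the "will retry after ..." log lines above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff *= 2
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```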
	I0108 21:35:53.709190   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:55.709861   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:56.625990   52569 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.372774419s)
	I0108 21:35:56.626024   52569 crio.go:451] Took 3.372892 seconds to extract the tarball
	I0108 21:35:56.626035   52569 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 21:35:56.669232   52569 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:35:56.725322   52569 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:35:56.725347   52569 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:35:56.725416   52569 ssh_runner.go:195] Run: crio config
	I0108 21:35:56.785660   52569 cni.go:84] Creating CNI manager for ""
	I0108 21:35:56.785691   52569 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:35:56.785725   52569 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:35:56.785750   52569 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.165 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-690577 NodeName:default-k8s-diff-port-690577 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:35:56.785933   52569 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.165
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-690577"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:35:56.786024   52569 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-690577 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-690577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0108 21:35:56.786087   52569 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:35:56.795810   52569 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:35:56.795900   52569 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:35:56.804941   52569 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0108 21:35:56.825110   52569 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:35:56.843828   52569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0108 21:35:56.865885   52569 ssh_runner.go:195] Run: grep 192.168.50.165	control-plane.minikube.internal$ /etc/hosts
	I0108 21:35:56.869950   52569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:35:56.884927   52569 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577 for IP: 192.168.50.165
	I0108 21:35:56.884965   52569 certs.go:190] acquiring lock for shared ca certs: {Name:mke01aa9d73e320a9a3907677cf29c75f0fa86d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:35:56.885152   52569 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key
	I0108 21:35:56.885214   52569 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key
	I0108 21:35:56.885303   52569 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.key
	I0108 21:35:56.885367   52569 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/apiserver.key.481af2eb
	I0108 21:35:56.885402   52569 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/proxy-client.key
	I0108 21:35:56.885507   52569 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem (1338 bytes)
	W0108 21:35:56.885534   52569 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896_empty.pem, impossibly tiny 0 bytes
	I0108 21:35:56.885542   52569 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:35:56.885565   52569 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem (1082 bytes)
	I0108 21:35:56.885586   52569 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:35:56.885609   52569 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem (1675 bytes)
	I0108 21:35:56.885649   52569 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem (1708 bytes)
	I0108 21:35:56.886259   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:35:56.915084   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0108 21:35:56.943379   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:35:56.976536   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0108 21:35:57.003251   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:35:57.028932   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 21:35:57.055967   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:35:57.082097   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:35:57.109014   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /usr/share/ca-certificates/178962.pem (1708 bytes)
	I0108 21:35:57.136824   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:35:57.163262   52569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem --> /usr/share/ca-certificates/17896.pem (1338 bytes)
	I0108 21:35:57.188437   52569 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:35:57.207270   52569 ssh_runner.go:195] Run: openssl version
	I0108 21:35:57.213550   52569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17896.pem && ln -fs /usr/share/ca-certificates/17896.pem /etc/ssl/certs/17896.pem"
	I0108 21:35:57.224833   52569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17896.pem
	I0108 21:35:57.229969   52569 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 21:35:57.230031   52569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17896.pem
	I0108 21:35:57.236137   52569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17896.pem /etc/ssl/certs/51391683.0"
	I0108 21:35:57.250128   52569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178962.pem && ln -fs /usr/share/ca-certificates/178962.pem /etc/ssl/certs/178962.pem"
	I0108 21:35:57.263614   52569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178962.pem
	I0108 21:35:57.268663   52569 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 21:35:57.268722   52569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178962.pem
	I0108 21:35:57.274709   52569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178962.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:35:57.285035   52569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:35:57.297360   52569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:35:57.303159   52569 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:35:57.303227   52569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:35:57.310184   52569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:35:57.321350   52569 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:35:57.326381   52569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 21:35:57.332735   52569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 21:35:57.339166   52569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 21:35:57.345609   52569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 21:35:57.352069   52569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 21:35:57.358712   52569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0108 21:35:57.365278   52569 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-690577 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-690577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.165 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:35:57.365376   52569 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 21:35:57.365442   52569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:35:57.406201   52569 cri.go:89] found id: ""
	I0108 21:35:57.406284   52569 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:35:57.417081   52569 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 21:35:57.417104   52569 kubeadm.go:636] restartCluster start
	I0108 21:35:57.417168   52569 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:35:57.426681   52569 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:35:57.427741   52569 kubeconfig.go:92] found "default-k8s-diff-port-690577" server: "https://192.168.50.165:8444"
	I0108 21:35:57.430310   52569 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:35:57.439313   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:35:57.439373   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:35:57.451186   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:35:57.939399   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:35:57.939495   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:35:57.951841   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:35:58.439378   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:35:58.439474   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:35:58.451276   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:35:58.939673   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:35:58.939768   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:35:58.954996   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:35:57.298814   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:35:57.299311   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:35:57.299336   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:35:57.299277   53390 retry.go:31] will retry after 1.580215784s: waiting for machine to come up
	I0108 21:35:58.881333   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:35:58.881894   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:35:58.881928   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:35:58.881851   53390 retry.go:31] will retry after 2.274169191s: waiting for machine to come up
	I0108 21:36:01.157157   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:01.157746   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:36:01.157777   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:36:01.157694   53390 retry.go:31] will retry after 3.180450147s: waiting for machine to come up
	I0108 21:35:57.885268   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:00.208458   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:35:59.440213   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:35:59.440311   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:35:59.455949   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:35:59.939459   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:35:59.939538   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:35:59.955014   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:00.439563   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:00.439652   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:00.451881   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:00.939460   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:00.939536   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:00.951863   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:01.439438   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:01.439515   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:01.455419   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:01.940039   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:01.940127   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:01.952708   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:02.440239   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:02.440352   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:02.452154   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:02.939700   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:02.939795   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:02.951484   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:03.440050   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:03.440165   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:03.452570   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:03.940186   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:03.940291   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:03.955051   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:04.341342   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:04.341830   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:36:04.341860   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:36:04.341745   53390 retry.go:31] will retry after 3.745387335s: waiting for machine to come up
	I0108 21:36:02.708322   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:04.711939   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:04.439488   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:04.439561   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:04.451223   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:04.940398   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:04.940465   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:04.953132   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:05.439361   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:05.439456   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:05.451004   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:05.939541   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:05.939623   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:05.954500   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:06.440080   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:06.440174   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:06.451596   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:06.940219   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:06.940307   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:06.951349   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:07.439843   52569 api_server.go:166] Checking apiserver status ...
	I0108 21:36:07.439918   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:07.451328   52569 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:07.451359   52569 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 21:36:07.451368   52569 kubeadm.go:1135] stopping kube-system containers ...
	I0108 21:36:07.451377   52569 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 21:36:07.451421   52569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:36:07.491513   52569 cri.go:89] found id: ""
	I0108 21:36:07.491591   52569 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:36:07.507608   52569 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:07.517204   52569 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:07.517259   52569 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:07.527334   52569 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:07.527375   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:07.662341   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:08.658855   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:08.874341   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:08.960570   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:09.047041   52569 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:36:09.047131   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:08.089265   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:08.089689   52240 main.go:141] libmachine: (embed-certs-930023) DBG | unable to find current IP address of domain embed-certs-930023 in network mk-embed-certs-930023
	I0108 21:36:08.089720   52240 main.go:141] libmachine: (embed-certs-930023) DBG | I0108 21:36:08.089642   53390 retry.go:31] will retry after 4.866500467s: waiting for machine to come up
	I0108 21:36:07.205296   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:09.206835   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:11.207446   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:09.548078   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:10.047318   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:10.547893   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:11.047210   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:11.548116   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:11.574275   52569 api_server.go:72] duration metric: took 2.527237099s to wait for apiserver process to appear ...
	I0108 21:36:11.574301   52569 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:36:11.574317   52569 api_server.go:253] Checking apiserver healthz at https://192.168.50.165:8444/healthz ...
	I0108 21:36:11.574845   52569 api_server.go:269] stopped: https://192.168.50.165:8444/healthz: Get "https://192.168.50.165:8444/healthz": dial tcp 192.168.50.165:8444: connect: connection refused
	I0108 21:36:12.074535   52569 api_server.go:253] Checking apiserver healthz at https://192.168.50.165:8444/healthz ...
	I0108 21:36:12.958255   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:12.958763   52240 main.go:141] libmachine: (embed-certs-930023) Found IP for machine: 192.168.39.142
	I0108 21:36:12.958785   52240 main.go:141] libmachine: (embed-certs-930023) Reserving static IP address...
	I0108 21:36:12.958805   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has current primary IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:12.959327   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "embed-certs-930023", mac: "52:54:00:50:57:1a", ip: "192.168.39.142"} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:12.959368   52240 main.go:141] libmachine: (embed-certs-930023) Reserved static IP address: 192.168.39.142
	I0108 21:36:12.959389   52240 main.go:141] libmachine: (embed-certs-930023) DBG | skip adding static IP to network mk-embed-certs-930023 - found existing host DHCP lease matching {name: "embed-certs-930023", mac: "52:54:00:50:57:1a", ip: "192.168.39.142"}
	I0108 21:36:12.959413   52240 main.go:141] libmachine: (embed-certs-930023) DBG | Getting to WaitForSSH function...
	I0108 21:36:12.959426   52240 main.go:141] libmachine: (embed-certs-930023) Waiting for SSH to be available...
	I0108 21:36:12.961680   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:12.962041   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:12.962084   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:12.962139   52240 main.go:141] libmachine: (embed-certs-930023) DBG | Using SSH client type: external
	I0108 21:36:12.962177   52240 main.go:141] libmachine: (embed-certs-930023) DBG | Using SSH private key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/embed-certs-930023/id_rsa (-rw-------)
	I0108 21:36:12.962218   52240 main.go:141] libmachine: (embed-certs-930023) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17907-10702/.minikube/machines/embed-certs-930023/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 21:36:12.962244   52240 main.go:141] libmachine: (embed-certs-930023) DBG | About to run SSH command:
	I0108 21:36:12.962258   52240 main.go:141] libmachine: (embed-certs-930023) DBG | exit 0
	I0108 21:36:13.068522   52240 main.go:141] libmachine: (embed-certs-930023) DBG | SSH cmd err, output: <nil>: 
	I0108 21:36:13.068921   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetConfigRaw
	I0108 21:36:13.069737   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetIP
	I0108 21:36:13.072633   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.073063   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:13.073095   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.073380   52240 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023/config.json ...
	I0108 21:36:13.073587   52240 machine.go:88] provisioning docker machine ...
	I0108 21:36:13.073616   52240 main.go:141] libmachine: (embed-certs-930023) Calling .DriverName
	I0108 21:36:13.073814   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetMachineName
	I0108 21:36:13.073969   52240 buildroot.go:166] provisioning hostname "embed-certs-930023"
	I0108 21:36:13.073988   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetMachineName
	I0108 21:36:13.074150   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHHostname
	I0108 21:36:13.076812   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.077132   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:13.077176   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.077260   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHPort
	I0108 21:36:13.077441   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:13.077607   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:13.077743   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHUsername
	I0108 21:36:13.077907   52240 main.go:141] libmachine: Using SSH client type: native
	I0108 21:36:13.078214   52240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0108 21:36:13.078228   52240 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-930023 && echo "embed-certs-930023" | sudo tee /etc/hostname
	I0108 21:36:13.224178   52240 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-930023
	
	I0108 21:36:13.224242   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHHostname
	I0108 21:36:13.227436   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.227890   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:13.227921   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.228140   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHPort
	I0108 21:36:13.228377   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:13.228554   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:13.228720   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHUsername
	I0108 21:36:13.228935   52240 main.go:141] libmachine: Using SSH client type: native
	I0108 21:36:13.229256   52240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0108 21:36:13.229272   52240 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-930023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-930023/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-930023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:36:13.373331   52240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:36:13.373374   52240 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17907-10702/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-10702/.minikube}
	I0108 21:36:13.373420   52240 buildroot.go:174] setting up certificates
	I0108 21:36:13.373434   52240 provision.go:83] configureAuth start
	I0108 21:36:13.373450   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetMachineName
	I0108 21:36:13.373767   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetIP
	I0108 21:36:13.376670   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.377136   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:13.377169   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.377382   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHHostname
	I0108 21:36:13.379650   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.380017   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:13.380052   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.380165   52240 provision.go:138] copyHostCerts
	I0108 21:36:13.380243   52240 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem, removing ...
	I0108 21:36:13.380253   52240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 21:36:13.380306   52240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem (1082 bytes)
	I0108 21:36:13.380388   52240 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem, removing ...
	I0108 21:36:13.380400   52240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 21:36:13.380420   52240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem (1123 bytes)
	I0108 21:36:13.380478   52240 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem, removing ...
	I0108 21:36:13.380488   52240 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 21:36:13.380505   52240 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem (1675 bytes)
	I0108 21:36:13.380579   52240 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem org=jenkins.embed-certs-930023 san=[192.168.39.142 192.168.39.142 localhost 127.0.0.1 minikube embed-certs-930023]
	I0108 21:36:13.564889   52240 provision.go:172] copyRemoteCerts
	I0108 21:36:13.564960   52240 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:36:13.564983   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHHostname
	I0108 21:36:13.568373   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.568738   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:13.568760   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.569032   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHPort
	I0108 21:36:13.569261   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:13.569480   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHUsername
	I0108 21:36:13.569630   52240 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/embed-certs-930023/id_rsa Username:docker}
	I0108 21:36:13.666946   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 21:36:13.691235   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0108 21:36:13.715967   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:36:13.746240   52240 provision.go:86] duration metric: configureAuth took 372.788461ms
	I0108 21:36:13.746280   52240 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:36:13.746507   52240 config.go:182] Loaded profile config "embed-certs-930023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:36:13.746578   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHHostname
	I0108 21:36:13.750165   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.750574   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:13.750621   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:13.750862   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHPort
	I0108 21:36:13.751072   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:13.751256   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:13.751417   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHUsername
	I0108 21:36:13.751564   52240 main.go:141] libmachine: Using SSH client type: native
	I0108 21:36:13.752017   52240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0108 21:36:13.752044   52240 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:36:14.120860   52240 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:36:14.120895   52240 machine.go:91] provisioned docker machine in 1.04729418s
	I0108 21:36:14.120908   52240 start.go:300] post-start starting for "embed-certs-930023" (driver="kvm2")
	I0108 21:36:14.120922   52240 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:36:14.120948   52240 main.go:141] libmachine: (embed-certs-930023) Calling .DriverName
	I0108 21:36:14.121294   52240 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:36:14.121323   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHHostname
	I0108 21:36:14.123885   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:14.124252   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:14.124282   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:14.124447   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHPort
	I0108 21:36:14.124640   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:14.124817   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHUsername
	I0108 21:36:14.124960   52240 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/embed-certs-930023/id_rsa Username:docker}
	I0108 21:36:14.227137   52240 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:36:14.233652   52240 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:36:14.233683   52240 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/addons for local assets ...
	I0108 21:36:14.233782   52240 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/files for local assets ...
	I0108 21:36:14.233889   52240 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> 178962.pem in /etc/ssl/certs
	I0108 21:36:14.234013   52240 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:36:14.245712   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /etc/ssl/certs/178962.pem (1708 bytes)
	I0108 21:36:14.270660   52240 start.go:303] post-start completed in 149.717252ms
	I0108 21:36:14.270690   52240 fix.go:56] fixHost completed within 24.6012999s
	I0108 21:36:14.270710   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHHostname
	I0108 21:36:14.273755   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:14.274136   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:14.274183   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:14.274357   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHPort
	I0108 21:36:14.274570   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:14.274781   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:14.274949   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHUsername
	I0108 21:36:14.275146   52240 main.go:141] libmachine: Using SSH client type: native
	I0108 21:36:14.275444   52240 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.142 22 <nil> <nil>}
	I0108 21:36:14.275463   52240 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 21:36:14.409490   52240 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704749774.349094985
	
	I0108 21:36:14.409578   52240 fix.go:206] guest clock: 1704749774.349094985
	I0108 21:36:14.409594   52240 fix.go:219] Guest: 2024-01-08 21:36:14.349094985 +0000 UTC Remote: 2024-01-08 21:36:14.270694168 +0000 UTC m=+322.554890045 (delta=78.400817ms)
	I0108 21:36:14.409642   52240 fix.go:190] guest clock delta is within tolerance: 78.400817ms
	I0108 21:36:14.409652   52240 start.go:83] releasing machines lock for "embed-certs-930023", held for 24.740296674s
	I0108 21:36:14.409684   52240 main.go:141] libmachine: (embed-certs-930023) Calling .DriverName
	I0108 21:36:14.409981   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetIP
	I0108 21:36:14.412969   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:14.413258   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:14.413291   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:14.413448   52240 main.go:141] libmachine: (embed-certs-930023) Calling .DriverName
	I0108 21:36:14.413974   52240 main.go:141] libmachine: (embed-certs-930023) Calling .DriverName
	I0108 21:36:14.414160   52240 main.go:141] libmachine: (embed-certs-930023) Calling .DriverName
	I0108 21:36:14.414248   52240 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:36:14.414299   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHHostname
	I0108 21:36:14.414367   52240 ssh_runner.go:195] Run: cat /version.json
	I0108 21:36:14.414387   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHHostname
	I0108 21:36:14.417086   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:14.417363   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:14.417769   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:14.417837   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:14.417872   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:14.417925   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:14.418223   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHPort
	I0108 21:36:14.418403   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHPort
	I0108 21:36:14.418428   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:14.418592   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHUsername
	I0108 21:36:14.418605   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:14.418774   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHUsername
	I0108 21:36:14.418821   52240 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/embed-certs-930023/id_rsa Username:docker}
	I0108 21:36:14.418936   52240 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/embed-certs-930023/id_rsa Username:docker}
	I0108 21:36:14.535885   52240 ssh_runner.go:195] Run: systemctl --version
	I0108 21:36:14.542138   52240 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:36:14.699737   52240 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 21:36:14.707381   52240 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:36:14.707459   52240 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:36:14.726143   52240 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 21:36:14.726167   52240 start.go:475] detecting cgroup driver to use...
	I0108 21:36:14.726243   52240 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:36:14.740525   52240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:36:14.755324   52240 docker.go:217] disabling cri-docker service (if available) ...
	I0108 21:36:14.755388   52240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:36:14.769615   52240 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:36:14.783478   52240 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:36:14.890254   52240 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:36:15.017222   52240 docker.go:233] disabling docker service ...
	I0108 21:36:15.017351   52240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:36:15.031486   52240 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:36:15.044362   52240 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:36:15.152682   52240 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:36:15.271885   52240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:36:15.284965   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:36:15.305058   52240 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:36:15.305127   52240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:36:15.314792   52240 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:36:15.314861   52240 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:36:15.325886   52240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:36:15.335721   52240 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:36:15.348897   52240 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:36:15.362595   52240 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:36:15.371743   52240 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 21:36:15.371822   52240 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 21:36:15.384875   52240 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:36:15.396057   52240 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:36:15.512519   52240 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:36:15.705518   52240 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:36:15.705641   52240 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:36:15.711339   52240 start.go:543] Will wait 60s for crictl version
	I0108 21:36:15.711401   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:36:15.716564   52240 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:36:15.767223   52240 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 21:36:15.767323   52240 ssh_runner.go:195] Run: crio --version
	I0108 21:36:15.810600   52240 ssh_runner.go:195] Run: crio --version
	I0108 21:36:15.866745   52240 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0108 21:36:15.868012   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetIP
	I0108 21:36:15.870700   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:15.871110   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:15.871131   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:15.871361   52240 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 21:36:15.876032   52240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:36:15.889825   52240 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:36:15.889901   52240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:36:15.939547   52240 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 21:36:15.939624   52240 ssh_runner.go:195] Run: which lz4
	I0108 21:36:15.943880   52240 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 21:36:15.949395   52240 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:36:15.949448   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0108 21:36:13.708402   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:15.709407   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:15.502381   52569 api_server.go:279] https://192.168.50.165:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:36:15.502409   52569 api_server.go:103] status: https://192.168.50.165:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:36:15.502425   52569 api_server.go:253] Checking apiserver healthz at https://192.168.50.165:8444/healthz ...
	I0108 21:36:15.550748   52569 api_server.go:279] https://192.168.50.165:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:36:15.550793   52569 api_server.go:103] status: https://192.168.50.165:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:36:15.574925   52569 api_server.go:253] Checking apiserver healthz at https://192.168.50.165:8444/healthz ...
	I0108 21:36:15.604784   52569 api_server.go:279] https://192.168.50.165:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:36:15.604819   52569 api_server.go:103] status: https://192.168.50.165:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:36:16.075370   52569 api_server.go:253] Checking apiserver healthz at https://192.168.50.165:8444/healthz ...
	I0108 21:36:16.081770   52569 api_server.go:279] https://192.168.50.165:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:36:16.081800   52569 api_server.go:103] status: https://192.168.50.165:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:36:16.575395   52569 api_server.go:253] Checking apiserver healthz at https://192.168.50.165:8444/healthz ...
	I0108 21:36:16.596626   52569 api_server.go:279] https://192.168.50.165:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:36:16.596674   52569 api_server.go:103] status: https://192.168.50.165:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:36:17.075208   52569 api_server.go:253] Checking apiserver healthz at https://192.168.50.165:8444/healthz ...
	I0108 21:36:17.081925   52569 api_server.go:279] https://192.168.50.165:8444/healthz returned 200:
	ok
	I0108 21:36:17.091302   52569 api_server.go:141] control plane version: v1.28.4
	I0108 21:36:17.091337   52569 api_server.go:131] duration metric: took 5.517028575s to wait for apiserver health ...
	I0108 21:36:17.091349   52569 cni.go:84] Creating CNI manager for ""
	I0108 21:36:17.091358   52569 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:36:17.093574   52569 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 21:36:17.095097   52569 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 21:36:17.129876   52569 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 21:36:17.179703   52569 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:36:17.199056   52569 system_pods.go:59] 8 kube-system pods found
	I0108 21:36:17.199106   52569 system_pods.go:61] "coredns-5dd5756b68-92m44" [048c7bfa-ea87-4f91-b002-c30fe11cac2a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 21:36:17.199118   52569 system_pods.go:61] "etcd-default-k8s-diff-port-690577" [4fd93437-1a2a-499b-8266-21530044d7b0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:36:17.199130   52569 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-690577" [84e50b6e-165c-4fb9-9127-c6ec504a23b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 21:36:17.199144   52569 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-690577" [2419d5e1-1b44-4bce-a603-99d1e64547ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:36:17.199154   52569 system_pods.go:61] "kube-proxy-qzxt5" [89e4ed5e-f9af-4a21-b744-73f9a3c4deda] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:36:17.199174   52569 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-690577" [fd74bf90-bef0-4a31-86dd-6999f46bc2e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:36:17.199183   52569 system_pods.go:61] "metrics-server-57f55c9bc5-46dvw" [6c095070-fdfd-4d65-b0b4-b4c234fad85d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:36:17.199197   52569 system_pods.go:61] "storage-provisioner" [69c923fb-6414-4802-9420-c02694250e2d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 21:36:17.199206   52569 system_pods.go:74] duration metric: took 19.479854ms to wait for pod list to return data ...
	I0108 21:36:17.199221   52569 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:36:17.203911   52569 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:36:17.203946   52569 node_conditions.go:123] node cpu capacity is 2
	I0108 21:36:17.203959   52569 node_conditions.go:105] duration metric: took 4.732104ms to run NodePressure ...
	I0108 21:36:17.203980   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:17.563686   52569 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 21:36:17.578757   52569 kubeadm.go:787] kubelet initialised
	I0108 21:36:17.578779   52569 kubeadm.go:788] duration metric: took 15.05982ms waiting for restarted kubelet to initialise ...
	I0108 21:36:17.578786   52569 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:17.601816   52569 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-92m44" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:17.618624   52569 pod_ready.go:97] node "default-k8s-diff-port-690577" hosting pod "coredns-5dd5756b68-92m44" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:17.618663   52569 pod_ready.go:81] duration metric: took 16.813021ms waiting for pod "coredns-5dd5756b68-92m44" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:17.618676   52569 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-690577" hosting pod "coredns-5dd5756b68-92m44" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:17.618686   52569 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:17.651493   52569 pod_ready.go:97] node "default-k8s-diff-port-690577" hosting pod "etcd-default-k8s-diff-port-690577" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:17.651525   52569 pod_ready.go:81] duration metric: took 32.828719ms waiting for pod "etcd-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:17.651540   52569 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-690577" hosting pod "etcd-default-k8s-diff-port-690577" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:17.651549   52569 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:17.663176   52569 pod_ready.go:97] node "default-k8s-diff-port-690577" hosting pod "kube-apiserver-default-k8s-diff-port-690577" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:17.663208   52569 pod_ready.go:81] duration metric: took 11.650371ms waiting for pod "kube-apiserver-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:17.663220   52569 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-690577" hosting pod "kube-apiserver-default-k8s-diff-port-690577" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:17.663226   52569 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:17.671121   52569 pod_ready.go:97] node "default-k8s-diff-port-690577" hosting pod "kube-controller-manager-default-k8s-diff-port-690577" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:17.671153   52569 pod_ready.go:81] duration metric: took 7.918398ms waiting for pod "kube-controller-manager-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:17.671175   52569 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-690577" hosting pod "kube-controller-manager-default-k8s-diff-port-690577" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:17.671185   52569 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qzxt5" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:17.984981   52569 pod_ready.go:97] node "default-k8s-diff-port-690577" hosting pod "kube-proxy-qzxt5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:17.985009   52569 pod_ready.go:81] duration metric: took 313.815942ms waiting for pod "kube-proxy-qzxt5" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:17.985020   52569 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-690577" hosting pod "kube-proxy-qzxt5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:17.985026   52569 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:18.385683   52569 pod_ready.go:97] node "default-k8s-diff-port-690577" hosting pod "kube-scheduler-default-k8s-diff-port-690577" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:18.385720   52569 pod_ready.go:81] duration metric: took 400.684253ms waiting for pod "kube-scheduler-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:18.385731   52569 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-690577" hosting pod "kube-scheduler-default-k8s-diff-port-690577" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:18.385737   52569 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:18.784941   52569 pod_ready.go:97] node "default-k8s-diff-port-690577" hosting pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:18.784977   52569 pod_ready.go:81] duration metric: took 399.232051ms waiting for pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:18.784995   52569 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-690577" hosting pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:18.785004   52569 pod_ready.go:38] duration metric: took 1.206207934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:18.785024   52569 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:18.799070   52569 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:18.799107   52569 kubeadm.go:640] restartCluster took 21.381990245s
	I0108 21:36:18.799118   52569 kubeadm.go:406] StartCluster complete in 21.433851658s
	I0108 21:36:18.799138   52569 settings.go:142] acquiring lock: {Name:mk91d3baf51872e4bb0758b94fca7c7249bb9666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:18.799235   52569 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:36:18.801750   52569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/kubeconfig: {Name:mkeb2e8a20e31c0c2d5c7e8214a27af3141300ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:18.802096   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:36:18.802155   52569 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:36:18.802245   52569 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-690577"
	I0108 21:36:18.802268   52569 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-690577"
	I0108 21:36:18.802300   52569 addons.go:237] Setting addon storage-provisioner=true in "default-k8s-diff-port-690577"
	I0108 21:36:18.802302   52569 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-690577"
	W0108 21:36:18.802311   52569 addons.go:246] addon storage-provisioner should already be in state true
	I0108 21:36:18.802329   52569 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-690577"
	I0108 21:36:18.802361   52569 host.go:66] Checking if "default-k8s-diff-port-690577" exists ...
	I0108 21:36:18.802366   52569 addons.go:237] Setting addon metrics-server=true in "default-k8s-diff-port-690577"
	W0108 21:36:18.802377   52569 addons.go:246] addon metrics-server should already be in state true
	I0108 21:36:18.802385   52569 config.go:182] Loaded profile config "default-k8s-diff-port-690577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:36:18.802458   52569 host.go:66] Checking if "default-k8s-diff-port-690577" exists ...
	I0108 21:36:18.802734   52569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:18.802761   52569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:18.802772   52569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:18.802852   52569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:18.802950   52569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:18.803074   52569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:18.808829   52569 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-690577" context rescaled to 1 replicas
	I0108 21:36:18.808871   52569 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.165 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:36:18.812151   52569 out.go:177] * Verifying Kubernetes components...
	I0108 21:36:18.813566   52569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:18.821671   52569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0108 21:36:18.821722   52569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46059
	I0108 21:36:18.821831   52569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39185
	I0108 21:36:18.822316   52569 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:18.822653   52569 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:18.822874   52569 main.go:141] libmachine: Using API Version  1
	I0108 21:36:18.822899   52569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:18.822912   52569 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:18.823235   52569 main.go:141] libmachine: Using API Version  1
	I0108 21:36:18.823254   52569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:18.823299   52569 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:18.823462   52569 main.go:141] libmachine: Using API Version  1
	I0108 21:36:18.823471   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetState
	I0108 21:36:18.823488   52569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:18.823755   52569 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:18.823826   52569 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:18.824400   52569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:18.824437   52569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:18.825200   52569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:18.825239   52569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:18.827014   52569 addons.go:237] Setting addon default-storageclass=true in "default-k8s-diff-port-690577"
	W0108 21:36:18.827571   52569 addons.go:246] addon default-storageclass should already be in state true
	I0108 21:36:18.827618   52569 host.go:66] Checking if "default-k8s-diff-port-690577" exists ...
	I0108 21:36:18.828348   52569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:18.828431   52569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:18.845040   52569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45275
	I0108 21:36:18.845692   52569 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:18.846316   52569 main.go:141] libmachine: Using API Version  1
	I0108 21:36:18.846337   52569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:18.846894   52569 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:18.847138   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetState
	I0108 21:36:18.848767   52569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0108 21:36:18.849144   52569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46335
	I0108 21:36:18.849322   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .DriverName
	I0108 21:36:18.849503   52569 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:18.849631   52569 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:18.849918   52569 main.go:141] libmachine: Using API Version  1
	I0108 21:36:18.849935   52569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:18.850233   52569 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:18.850753   52569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:18.850801   52569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:18.851193   52569 main.go:141] libmachine: Using API Version  1
	I0108 21:36:18.851204   52569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:18.851512   52569 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:18.851746   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetState
	I0108 21:36:18.858606   52569 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 21:36:18.860239   52569 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:36:18.860260   52569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:36:18.860286   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHHostname
	I0108 21:36:18.860929   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .DriverName
	I0108 21:36:18.862615   52569 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:36:18.864273   52569 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:18.864288   52569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:36:18.864305   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHHostname
	I0108 21:36:18.863495   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:36:18.864405   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHPort
	I0108 21:36:18.864445   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:36:18.864464   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:36:18.864613   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:36:18.864778   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHUsername
	I0108 21:36:18.864890   52569 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/default-k8s-diff-port-690577/id_rsa Username:docker}
	I0108 21:36:18.867932   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:36:18.868405   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:36:18.868472   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:36:18.868723   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHPort
	I0108 21:36:18.868935   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:36:18.869093   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHUsername
	I0108 21:36:18.869247   52569 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/default-k8s-diff-port-690577/id_rsa Username:docker}
	I0108 21:36:18.871749   52569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0108 21:36:18.872122   52569 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:18.872713   52569 main.go:141] libmachine: Using API Version  1
	I0108 21:36:18.872729   52569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:18.873131   52569 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:18.873320   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetState
	I0108 21:36:18.875475   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .DriverName
	I0108 21:36:18.875715   52569 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:18.875726   52569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:36:18.875743   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHHostname
	I0108 21:36:18.879388   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:36:18.879427   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:45:26", ip: ""} in network mk-default-k8s-diff-port-690577: {Iface:virbr4 ExpiryTime:2024-01-08 22:27:43 +0000 UTC Type:0 Mac:52:54:00:b5:45:26 Iaid: IPaddr:192.168.50.165 Prefix:24 Hostname:default-k8s-diff-port-690577 Clientid:01:52:54:00:b5:45:26}
	I0108 21:36:18.879444   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | domain default-k8s-diff-port-690577 has defined IP address 192.168.50.165 and MAC address 52:54:00:b5:45:26 in network mk-default-k8s-diff-port-690577
	I0108 21:36:18.879644   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHPort
	I0108 21:36:18.879878   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHKeyPath
	I0108 21:36:18.880003   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .GetSSHUsername
	I0108 21:36:18.880187   52569 sshutil.go:53] new ssh client: &{IP:192.168.50.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/default-k8s-diff-port-690577/id_rsa Username:docker}
	I0108 21:36:18.988336   52569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:19.043285   52569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:19.075999   52569 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:36:19.076020   52569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 21:36:19.135751   52569 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:36:19.135784   52569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:36:19.185116   52569 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-690577" to be "Ready" ...
	I0108 21:36:19.185235   52569 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 21:36:19.220065   52569 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:19.220104   52569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:36:19.305634   52569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:21.057308   52569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.013982838s)
	I0108 21:36:21.057350   52569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.068968743s)
	I0108 21:36:21.057371   52569 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:21.057384   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .Close
	I0108 21:36:21.057384   52569 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:21.057398   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .Close
	I0108 21:36:21.057876   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | Closing plugin on server side
	I0108 21:36:21.057919   52569 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:21.057929   52569 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:21.057938   52569 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:21.057947   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .Close
	I0108 21:36:21.058045   52569 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:21.058055   52569 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:21.058064   52569 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:21.058072   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .Close
	I0108 21:36:21.058381   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | Closing plugin on server side
	I0108 21:36:21.058418   52569 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:21.058427   52569 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:21.058532   52569 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:21.058548   52569 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:21.074228   52569 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:21.074259   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .Close
	I0108 21:36:21.076243   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | Closing plugin on server side
	I0108 21:36:21.076243   52569 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:21.076295   52569 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:21.161312   52569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.855618838s)
	I0108 21:36:21.161380   52569 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:21.161395   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .Close
	I0108 21:36:21.161934   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | Closing plugin on server side
	I0108 21:36:21.161985   52569 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:21.161995   52569 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:21.162004   52569 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:21.162019   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) Calling .Close
	I0108 21:36:21.162297   52569 main.go:141] libmachine: (default-k8s-diff-port-690577) DBG | Closing plugin on server side
	I0108 21:36:21.162338   52569 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:21.162364   52569 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:21.162376   52569 addons.go:473] Verifying addon metrics-server=true in "default-k8s-diff-port-690577"
	I0108 21:36:21.193723   52569 node_ready.go:58] node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:21.294830   52569 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0108 21:36:17.937274   52240 crio.go:444] Took 1.993445 seconds to copy over tarball
	I0108 21:36:17.937396   52240 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 21:36:21.729519   52240 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.792076252s)
	I0108 21:36:21.729544   52240 crio.go:451] Took 3.792243 seconds to extract the tarball
	I0108 21:36:21.729553   52240 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 21:36:17.709964   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:20.208834   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:21.593738   52569 addons.go:508] enable addons completed in 2.791578557s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0108 21:36:23.689138   52569 node_ready.go:58] node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:21.773779   52240 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:36:21.892592   52240 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:36:21.892615   52240 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:36:21.892676   52240 ssh_runner.go:195] Run: crio config
	I0108 21:36:21.958788   52240 cni.go:84] Creating CNI manager for ""
	I0108 21:36:21.958813   52240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:36:21.958830   52240 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:36:21.958850   52240 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.142 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-930023 NodeName:embed-certs-930023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:36:21.959016   52240 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-930023"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:36:21.959147   52240 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-930023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-930023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:36:21.959220   52240 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:36:21.969891   52240 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:36:21.969977   52240 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:36:21.979591   52240 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0108 21:36:22.000000   52240 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:36:22.018355   52240 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0108 21:36:22.037233   52240 ssh_runner.go:195] Run: grep 192.168.39.142	control-plane.minikube.internal$ /etc/hosts
	I0108 21:36:22.041194   52240 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:36:22.054922   52240 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023 for IP: 192.168.39.142
	I0108 21:36:22.054953   52240 certs.go:190] acquiring lock for shared ca certs: {Name:mke01aa9d73e320a9a3907677cf29c75f0fa86d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:22.055116   52240 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key
	I0108 21:36:22.055197   52240 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key
	I0108 21:36:22.055311   52240 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023/client.key
	I0108 21:36:22.055386   52240 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023/apiserver.key.4bb0a69b
	I0108 21:36:22.055444   52240 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023/proxy-client.key
	I0108 21:36:22.055593   52240 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem (1338 bytes)
	W0108 21:36:22.055632   52240 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896_empty.pem, impossibly tiny 0 bytes
	I0108 21:36:22.055648   52240 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:36:22.055693   52240 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem (1082 bytes)
	I0108 21:36:22.055726   52240 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:36:22.055763   52240 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem (1675 bytes)
	I0108 21:36:22.055832   52240 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem (1708 bytes)
	I0108 21:36:22.056512   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:36:22.083133   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 21:36:22.111374   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:36:22.136788   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/embed-certs-930023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:36:22.163341   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:36:22.192544   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 21:36:22.220175   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:36:22.249432   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:36:22.276929   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /usr/share/ca-certificates/178962.pem (1708 bytes)
	I0108 21:36:22.302424   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:36:22.332103   52240 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem --> /usr/share/ca-certificates/17896.pem (1338 bytes)
	I0108 21:36:22.357610   52240 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:36:22.377539   52240 ssh_runner.go:195] Run: openssl version
	I0108 21:36:22.383338   52240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178962.pem && ln -fs /usr/share/ca-certificates/178962.pem /etc/ssl/certs/178962.pem"
	I0108 21:36:22.394450   52240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178962.pem
	I0108 21:36:22.399586   52240 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 21:36:22.399731   52240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178962.pem
	I0108 21:36:22.406746   52240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178962.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:36:22.417874   52240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:36:22.428206   52240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:36:22.433527   52240 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:36:22.433589   52240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:36:22.439240   52240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:36:22.450095   52240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17896.pem && ln -fs /usr/share/ca-certificates/17896.pem /etc/ssl/certs/17896.pem"
	I0108 21:36:22.460785   52240 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17896.pem
	I0108 21:36:22.465661   52240 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 21:36:22.465746   52240 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17896.pem
	I0108 21:36:22.472065   52240 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17896.pem /etc/ssl/certs/51391683.0"
	I0108 21:36:22.485767   52240 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:36:22.491047   52240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 21:36:22.497845   52240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 21:36:22.504131   52240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 21:36:22.510528   52240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 21:36:22.517151   52240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 21:36:22.523032   52240 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0108 21:36:22.529019   52240 kubeadm.go:404] StartCluster: {Name:embed-certs-930023 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-930023 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:36:22.529115   52240 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 21:36:22.529174   52240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:36:22.571237   52240 cri.go:89] found id: ""
	I0108 21:36:22.571296   52240 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:36:22.581728   52240 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0108 21:36:22.581752   52240 kubeadm.go:636] restartCluster start
	I0108 21:36:22.581807   52240 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0108 21:36:22.590934   52240 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:22.592208   52240 kubeconfig.go:92] found "embed-certs-930023" server: "https://192.168.39.142:8443"
	I0108 21:36:22.595407   52240 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0108 21:36:22.604699   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:22.604755   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:22.616689   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:23.104892   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:23.104966   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:23.117107   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:23.605420   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:23.605524   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:23.616939   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:24.105285   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:24.105368   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:24.118949   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:24.605565   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:24.605680   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:24.619756   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:25.105775   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:25.105870   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:25.121865   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:25.605265   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:25.605340   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:25.617160   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:26.105777   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:26.105866   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:26.120991   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:26.605284   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:26.605401   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:26.618563   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:22.247724   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:24.705673   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:25.690989   52569 node_ready.go:58] node "default-k8s-diff-port-690577" has status "Ready":"False"
	I0108 21:36:26.190531   52569 node_ready.go:49] node "default-k8s-diff-port-690577" has status "Ready":"True"
	I0108 21:36:26.190556   52569 node_ready.go:38] duration metric: took 7.005407883s waiting for node "default-k8s-diff-port-690577" to be "Ready" ...
	I0108 21:36:26.190565   52569 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:26.196663   52569 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-92m44" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:26.202672   52569 pod_ready.go:92] pod "coredns-5dd5756b68-92m44" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:26.202702   52569 pod_ready.go:81] duration metric: took 6.013716ms waiting for pod "coredns-5dd5756b68-92m44" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:26.202716   52569 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:26.208229   52569 pod_ready.go:92] pod "etcd-default-k8s-diff-port-690577" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:26.208251   52569 pod_ready.go:81] duration metric: took 5.526529ms waiting for pod "etcd-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:26.208264   52569 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:26.213633   52569 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-690577" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:26.213658   52569 pod_ready.go:81] duration metric: took 5.385314ms waiting for pod "kube-apiserver-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:26.213669   52569 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:28.227159   52569 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-690577" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:27.104826   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:27.104953   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:27.117218   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:27.604777   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:27.604915   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:27.617278   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:28.104782   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:28.104855   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:28.117742   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:28.605256   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:28.605351   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:28.619383   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:29.104923   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:29.105008   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:29.117453   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:29.604844   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:29.604930   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:29.617295   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:30.105434   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:30.105555   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:30.117593   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:30.605155   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:30.605241   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:30.617278   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:31.104769   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:31.104863   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:31.116507   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:31.605157   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:31.605239   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:31.617324   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:27.209393   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:29.706734   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:31.707327   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:30.230479   52569 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-690577" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:30.230503   52569 pod_ready.go:81] duration metric: took 4.016825214s waiting for pod "kube-controller-manager-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:30.230513   52569 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qzxt5" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:30.237588   52569 pod_ready.go:92] pod "kube-proxy-qzxt5" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:30.237608   52569 pod_ready.go:81] duration metric: took 7.089232ms waiting for pod "kube-proxy-qzxt5" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:30.237616   52569 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:30.245081   52569 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-690577" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:30.245106   52569 pod_ready.go:81] duration metric: took 7.48269ms waiting for pod "kube-scheduler-default-k8s-diff-port-690577" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:30.245120   52569 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:32.254067   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:32.105705   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:32.105812   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:32.118300   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:32.604851   52240 api_server.go:166] Checking apiserver status ...
	I0108 21:36:32.604950   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0108 21:36:32.616970   52240 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0108 21:36:32.617000   52240 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0108 21:36:32.617032   52240 kubeadm.go:1135] stopping kube-system containers ...
	I0108 21:36:32.617044   52240 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0108 21:36:32.617111   52240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:36:32.666476   52240 cri.go:89] found id: ""
	I0108 21:36:32.666559   52240 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0108 21:36:32.682685   52240 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:36:32.692168   52240 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:36:32.692235   52240 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:32.702936   52240 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0108 21:36:32.702956   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:32.837546   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:33.330725   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:33.535418   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:33.627267   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:33.711597   52240 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:36:33.711686   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:34.211851   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:34.712612   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:35.212630   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:35.712759   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:36.211839   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:36:36.233830   52240 api_server.go:72] duration metric: took 2.522231565s to wait for apiserver process to appear ...
	I0108 21:36:36.233858   52240 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:36:36.233875   52240 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0108 21:36:36.234420   52240 api_server.go:269] stopped: https://192.168.39.142:8443/healthz: Get "https://192.168.39.142:8443/healthz": dial tcp 192.168.39.142:8443: connect: connection refused
	I0108 21:36:36.733943   52240 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0108 21:36:33.707749   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:36.209726   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:34.754148   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:37.254297   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:39.408014   52240 api_server.go:279] https://192.168.39.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:36:39.408048   52240 api_server.go:103] status: https://192.168.39.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:36:39.408065   52240 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0108 21:36:39.445025   52240 api_server.go:279] https://192.168.39.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0108 21:36:39.445061   52240 api_server.go:103] status: https://192.168.39.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0108 21:36:39.734865   52240 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0108 21:36:39.740816   52240 api_server.go:279] https://192.168.39.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:36:39.740874   52240 api_server.go:103] status: https://192.168.39.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:36:40.234304   52240 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0108 21:36:40.240558   52240 api_server.go:279] https://192.168.39.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0108 21:36:40.240593   52240 api_server.go:103] status: https://192.168.39.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0108 21:36:40.734103   52240 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0108 21:36:40.739641   52240 api_server.go:279] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0108 21:36:40.761154   52240 api_server.go:141] control plane version: v1.28.4
	I0108 21:36:40.761194   52240 api_server.go:131] duration metric: took 4.527328192s to wait for apiserver health ...
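For reference, the api_server.go messages above (repeated "Checking apiserver healthz ...", the 403/500 responses, then "returned 200: ok") correspond to a simple poll-until-healthy loop. A minimal sketch of such a loop is below; it is illustrative only, and the endpoint address, poll interval, and overall timeout are assumptions chosen for the example, not values taken from minikube's code.

```go
// Sketch: poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// During bootstrap the apiserver presents a cert the probe may not trust,
		// so this anonymous check skips verification (example assumption).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out waiting for %s to become healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.142:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```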
	I0108 21:36:40.761206   52240 cni.go:84] Creating CNI manager for ""
	I0108 21:36:40.761214   52240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:36:40.763287   52240 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 21:36:40.764948   52240 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 21:36:40.781213   52240 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 21:36:40.815058   52240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:36:40.826694   52240 system_pods.go:59] 8 kube-system pods found
	I0108 21:36:40.826726   52240 system_pods.go:61] "coredns-5dd5756b68-jlpx5" [a3128151-c8ce-44da-a192-3b4a2ae1e3f8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0108 21:36:40.826732   52240 system_pods.go:61] "etcd-embed-certs-930023" [392e8e69-7cd2-4346-aa55-887d736dfc01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0108 21:36:40.826740   52240 system_pods.go:61] "kube-apiserver-embed-certs-930023" [98bd475f-c413-40c0-b99c-fdcc29687925] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0108 21:36:40.826746   52240 system_pods.go:61] "kube-controller-manager-embed-certs-930023" [31dd08df-27c2-4ed0-8c42-03ff09294e06] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0108 21:36:40.826752   52240 system_pods.go:61] "kube-proxy-8qs2r" [ed301cf2-3f54-4b4c-880b-2fe829c81093] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0108 21:36:40.826758   52240 system_pods.go:61] "kube-scheduler-embed-certs-930023" [3041f9c9-d48b-4910-90ca-127f4b9e2485] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0108 21:36:40.826764   52240 system_pods.go:61] "metrics-server-57f55c9bc5-rj499" [5873675f-8a6c-4404-be01-b46763a62f5c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:36:40.826773   52240 system_pods.go:61] "storage-provisioner" [1ef46fa1-8048-4f26-b999-6b78c5450cb8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0108 21:36:40.826780   52240 system_pods.go:74] duration metric: took 11.695241ms to wait for pod list to return data ...
	I0108 21:36:40.826787   52240 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:36:40.830728   52240 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:36:40.830774   52240 node_conditions.go:123] node cpu capacity is 2
	I0108 21:36:40.830788   52240 node_conditions.go:105] duration metric: took 3.996036ms to run NodePressure ...
	I0108 21:36:40.830809   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0108 21:36:41.195927   52240 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0108 21:36:41.209035   52240 kubeadm.go:787] kubelet initialised
	I0108 21:36:41.209059   52240 kubeadm.go:788] duration metric: took 13.105004ms waiting for restarted kubelet to initialise ...
	I0108 21:36:41.209068   52240 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:41.217293   52240 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jlpx5" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:41.225320   52240 pod_ready.go:97] node "embed-certs-930023" hosting pod "coredns-5dd5756b68-jlpx5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:41.225360   52240 pod_ready.go:81] duration metric: took 8.02529ms waiting for pod "coredns-5dd5756b68-jlpx5" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:41.225372   52240 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-930023" hosting pod "coredns-5dd5756b68-jlpx5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:41.225381   52240 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:41.243966   52240 pod_ready.go:97] node "embed-certs-930023" hosting pod "etcd-embed-certs-930023" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:41.244001   52240 pod_ready.go:81] duration metric: took 18.608025ms waiting for pod "etcd-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:41.244012   52240 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-930023" hosting pod "etcd-embed-certs-930023" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:41.244018   52240 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:41.256236   52240 pod_ready.go:97] node "embed-certs-930023" hosting pod "kube-apiserver-embed-certs-930023" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:41.256262   52240 pod_ready.go:81] duration metric: took 12.233663ms waiting for pod "kube-apiserver-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:41.256275   52240 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-930023" hosting pod "kube-apiserver-embed-certs-930023" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:41.256284   52240 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:41.263909   52240 pod_ready.go:97] node "embed-certs-930023" hosting pod "kube-controller-manager-embed-certs-930023" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:41.263936   52240 pod_ready.go:81] duration metric: took 7.640667ms waiting for pod "kube-controller-manager-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:41.263951   52240 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-930023" hosting pod "kube-controller-manager-embed-certs-930023" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:41.263960   52240 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8qs2r" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:41.620048   52240 pod_ready.go:97] node "embed-certs-930023" hosting pod "kube-proxy-8qs2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:41.620076   52240 pod_ready.go:81] duration metric: took 356.10182ms waiting for pod "kube-proxy-8qs2r" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:41.620087   52240 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-930023" hosting pod "kube-proxy-8qs2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:41.620113   52240 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:38.707472   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:41.208597   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:42.019576   52240 pod_ready.go:97] node "embed-certs-930023" hosting pod "kube-scheduler-embed-certs-930023" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:42.019602   52240 pod_ready.go:81] duration metric: took 399.481001ms waiting for pod "kube-scheduler-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:42.019611   52240 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-930023" hosting pod "kube-scheduler-embed-certs-930023" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:42.019617   52240 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:42.419317   52240 pod_ready.go:97] node "embed-certs-930023" hosting pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:42.419346   52240 pod_ready.go:81] duration metric: took 399.719557ms waiting for pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace to be "Ready" ...
	E0108 21:36:42.419357   52240 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-930023" hosting pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:42.419370   52240 pod_ready.go:38] duration metric: took 1.210282388s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:42.419390   52240 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:36:42.431666   52240 ops.go:34] apiserver oom_adj: -16
	I0108 21:36:42.431691   52240 kubeadm.go:640] restartCluster took 19.849932122s
	I0108 21:36:42.431700   52240 kubeadm.go:406] StartCluster complete in 19.902687486s
	I0108 21:36:42.431719   52240 settings.go:142] acquiring lock: {Name:mk91d3baf51872e4bb0758b94fca7c7249bb9666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:42.431821   52240 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:36:42.433943   52240 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/kubeconfig: {Name:mkeb2e8a20e31c0c2d5c7e8214a27af3141300ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:36:42.434160   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:36:42.434284   52240 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:36:42.434389   52240 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-930023"
	I0108 21:36:42.434397   52240 config.go:182] Loaded profile config "embed-certs-930023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:36:42.434405   52240 addons.go:69] Setting default-storageclass=true in profile "embed-certs-930023"
	I0108 21:36:42.434417   52240 addons.go:237] Setting addon storage-provisioner=true in "embed-certs-930023"
	I0108 21:36:42.434427   52240 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-930023"
	W0108 21:36:42.434447   52240 addons.go:246] addon storage-provisioner should already be in state true
	I0108 21:36:42.434468   52240 addons.go:69] Setting metrics-server=true in profile "embed-certs-930023"
	I0108 21:36:42.434504   52240 host.go:66] Checking if "embed-certs-930023" exists ...
	I0108 21:36:42.434508   52240 addons.go:237] Setting addon metrics-server=true in "embed-certs-930023"
	W0108 21:36:42.434519   52240 addons.go:246] addon metrics-server should already be in state true
	I0108 21:36:42.434604   52240 host.go:66] Checking if "embed-certs-930023" exists ...
	I0108 21:36:42.434881   52240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:42.434885   52240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:42.434908   52240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:42.434910   52240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:42.435060   52240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:42.435096   52240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:42.439427   52240 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-930023" context rescaled to 1 replicas
	I0108 21:36:42.439462   52240 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.142 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:36:42.441835   52240 out.go:177] * Verifying Kubernetes components...
	I0108 21:36:42.443285   52240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:36:42.452018   52240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46509
	I0108 21:36:42.452555   52240 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:42.453193   52240 main.go:141] libmachine: Using API Version  1
	I0108 21:36:42.453217   52240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:42.453346   52240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42421
	I0108 21:36:42.453595   52240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45557
	I0108 21:36:42.453768   52240 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:42.453895   52240 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:42.454055   52240 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:42.454396   52240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:42.454434   52240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:42.454482   52240 main.go:141] libmachine: Using API Version  1
	I0108 21:36:42.454496   52240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:42.454656   52240 main.go:141] libmachine: Using API Version  1
	I0108 21:36:42.454690   52240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:42.455049   52240 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:42.455094   52240 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:42.455300   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetState
	I0108 21:36:42.455638   52240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:42.455698   52240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:42.459200   52240 addons.go:237] Setting addon default-storageclass=true in "embed-certs-930023"
	W0108 21:36:42.459230   52240 addons.go:246] addon default-storageclass should already be in state true
	I0108 21:36:42.459257   52240 host.go:66] Checking if "embed-certs-930023" exists ...
	I0108 21:36:42.459613   52240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:42.459653   52240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:42.471771   52240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44427
	I0108 21:36:42.471800   52240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
	I0108 21:36:42.472249   52240 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:42.472385   52240 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:42.472942   52240 main.go:141] libmachine: Using API Version  1
	I0108 21:36:42.472964   52240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:42.473067   52240 main.go:141] libmachine: Using API Version  1
	I0108 21:36:42.473094   52240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:42.473646   52240 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:42.473677   52240 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:42.473832   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetState
	I0108 21:36:42.474158   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetState
	I0108 21:36:42.475792   52240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42905
	I0108 21:36:42.475899   52240 main.go:141] libmachine: (embed-certs-930023) Calling .DriverName
	I0108 21:36:42.476141   52240 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:42.478431   52240 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0108 21:36:42.476479   52240 main.go:141] libmachine: (embed-certs-930023) Calling .DriverName
	I0108 21:36:42.476559   52240 main.go:141] libmachine: Using API Version  1
	I0108 21:36:42.479930   52240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:42.480011   52240 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 21:36:42.480019   52240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 21:36:42.480030   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHHostname
	I0108 21:36:42.480585   52240 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:42.482348   52240 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:36:42.481509   52240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:36:42.483768   52240 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:42.483777   52240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:36:42.483783   52240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:36:42.483802   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHHostname
	I0108 21:36:42.483885   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHPort
	I0108 21:36:42.483318   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:42.483948   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:42.483998   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:42.484067   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:42.484229   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHUsername
	I0108 21:36:42.484397   52240 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/embed-certs-930023/id_rsa Username:docker}
	I0108 21:36:42.487723   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:42.488532   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:42.488572   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:42.488912   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHPort
	I0108 21:36:42.489316   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:42.489497   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHUsername
	I0108 21:36:42.489645   52240 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/embed-certs-930023/id_rsa Username:docker}
	I0108 21:36:42.500518   52240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44323
	I0108 21:36:42.500907   52240 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:36:42.501454   52240 main.go:141] libmachine: Using API Version  1
	I0108 21:36:42.501476   52240 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:36:42.501751   52240 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:36:42.501908   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetState
	I0108 21:36:42.503415   52240 main.go:141] libmachine: (embed-certs-930023) Calling .DriverName
	I0108 21:36:42.503645   52240 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:42.503660   52240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:36:42.503677   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHHostname
	I0108 21:36:42.506879   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:42.507241   52240 main.go:141] libmachine: (embed-certs-930023) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:57:1a", ip: ""} in network mk-embed-certs-930023: {Iface:virbr2 ExpiryTime:2024-01-08 22:36:03 +0000 UTC Type:0 Mac:52:54:00:50:57:1a Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:embed-certs-930023 Clientid:01:52:54:00:50:57:1a}
	I0108 21:36:42.507266   52240 main.go:141] libmachine: (embed-certs-930023) DBG | domain embed-certs-930023 has defined IP address 192.168.39.142 and MAC address 52:54:00:50:57:1a in network mk-embed-certs-930023
	I0108 21:36:42.507467   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHPort
	I0108 21:36:42.507595   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHKeyPath
	I0108 21:36:42.507701   52240 main.go:141] libmachine: (embed-certs-930023) Calling .GetSSHUsername
	I0108 21:36:42.507775   52240 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/embed-certs-930023/id_rsa Username:docker}
	I0108 21:36:42.607593   52240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:36:42.666869   52240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:36:42.669841   52240 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 21:36:42.669860   52240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0108 21:36:42.714544   52240 node_ready.go:35] waiting up to 6m0s for node "embed-certs-930023" to be "Ready" ...
	I0108 21:36:42.714764   52240 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0108 21:36:42.744731   52240 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 21:36:42.744753   52240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 21:36:42.780870   52240 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:42.780899   52240 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 21:36:42.831647   52240 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 21:36:44.078135   52240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.411229076s)
	I0108 21:36:44.078219   52240 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:44.078237   52240 main.go:141] libmachine: (embed-certs-930023) Calling .Close
	I0108 21:36:44.078267   52240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.47058337s)
	I0108 21:36:44.078304   52240 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:44.078346   52240 main.go:141] libmachine: (embed-certs-930023) Calling .Close
	I0108 21:36:44.078556   52240 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:44.078572   52240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:44.078596   52240 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:44.078604   52240 main.go:141] libmachine: (embed-certs-930023) Calling .Close
	I0108 21:36:44.078745   52240 main.go:141] libmachine: (embed-certs-930023) DBG | Closing plugin on server side
	I0108 21:36:44.078812   52240 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:44.078827   52240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:44.078832   52240 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:44.078847   52240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:44.078836   52240 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:44.078939   52240 main.go:141] libmachine: (embed-certs-930023) Calling .Close
	I0108 21:36:44.079184   52240 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:44.079199   52240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:44.079216   52240 main.go:141] libmachine: (embed-certs-930023) DBG | Closing plugin on server side
	I0108 21:36:44.094486   52240 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:44.094511   52240 main.go:141] libmachine: (embed-certs-930023) Calling .Close
	I0108 21:36:44.094813   52240 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:44.094836   52240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:44.094852   52240 main.go:141] libmachine: (embed-certs-930023) DBG | Closing plugin on server side
	I0108 21:36:44.201579   52240 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.369888244s)
	I0108 21:36:44.201645   52240 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:44.201657   52240 main.go:141] libmachine: (embed-certs-930023) Calling .Close
	I0108 21:36:44.202039   52240 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:44.202101   52240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:44.202124   52240 main.go:141] libmachine: Making call to close driver server
	I0108 21:36:44.202148   52240 main.go:141] libmachine: (embed-certs-930023) Calling .Close
	I0108 21:36:44.203417   52240 main.go:141] libmachine: (embed-certs-930023) DBG | Closing plugin on server side
	I0108 21:36:44.203455   52240 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:36:44.203464   52240 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:36:44.203473   52240 addons.go:473] Verifying addon metrics-server=true in "embed-certs-930023"
	I0108 21:36:44.205737   52240 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0108 21:36:39.255428   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:41.754396   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:44.207390   52240 addons.go:508] enable addons completed in 1.773100645s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0108 21:36:44.719407   52240 node_ready.go:58] node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:43.708290   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:46.207244   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:44.254930   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:46.267498   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:48.752551   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:47.219592   52240 node_ready.go:58] node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:49.720888   52240 node_ready.go:58] node "embed-certs-930023" has status "Ready":"False"
	I0108 21:36:50.220411   52240 node_ready.go:49] node "embed-certs-930023" has status "Ready":"True"
	I0108 21:36:50.220443   52240 node_ready.go:38] duration metric: took 7.50586854s waiting for node "embed-certs-930023" to be "Ready" ...
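The node_ready.go lines above reflect polling the Node object until its Ready condition reports True. A minimal client-go sketch of that kind of wait is shown below; it is an illustration under stated assumptions, not minikube's own implementation. The kubeconfig path and node name are taken from the log above, while the poll interval and timeout are assumed values.

```go
// Sketch: wait for a Kubernetes node's Ready condition to become True using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and node name as seen in the log; adjust for other environments.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17907-10702/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-930023", metav1.GetOptions{})
		if err == nil && nodeIsReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	fmt.Println("timed out waiting for node to be Ready")
}
```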
	I0108 21:36:50.220456   52240 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:36:50.231598   52240 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jlpx5" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:50.241214   52240 pod_ready.go:92] pod "coredns-5dd5756b68-jlpx5" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:50.241244   52240 pod_ready.go:81] duration metric: took 9.612691ms waiting for pod "coredns-5dd5756b68-jlpx5" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:50.241258   52240 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:50.248950   52240 pod_ready.go:92] pod "etcd-embed-certs-930023" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:50.248980   52240 pod_ready.go:81] duration metric: took 7.712787ms waiting for pod "etcd-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:50.248992   52240 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:48.708526   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:51.208071   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:50.753947   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:53.254275   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:52.257646   52240 pod_ready.go:102] pod "kube-apiserver-embed-certs-930023" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:52.756654   52240 pod_ready.go:92] pod "kube-apiserver-embed-certs-930023" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:52.756686   52240 pod_ready.go:81] duration metric: took 2.507684319s waiting for pod "kube-apiserver-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:52.756701   52240 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:52.762484   52240 pod_ready.go:92] pod "kube-controller-manager-embed-certs-930023" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:52.762510   52240 pod_ready.go:81] duration metric: took 5.80027ms waiting for pod "kube-controller-manager-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:52.762520   52240 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8qs2r" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:52.768546   52240 pod_ready.go:92] pod "kube-proxy-8qs2r" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:52.768567   52240 pod_ready.go:81] duration metric: took 6.0407ms waiting for pod "kube-proxy-8qs2r" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:52.768577   52240 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:54.776219   52240 pod_ready.go:102] pod "kube-scheduler-embed-certs-930023" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:53.711228   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:56.207140   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:55.756592   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:58.253769   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:56.791731   52240 pod_ready.go:102] pod "kube-scheduler-embed-certs-930023" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:57.277318   52240 pod_ready.go:92] pod "kube-scheduler-embed-certs-930023" in "kube-system" namespace has status "Ready":"True"
	I0108 21:36:57.277347   52240 pod_ready.go:81] duration metric: took 4.508761695s waiting for pod "kube-scheduler-embed-certs-930023" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:57.277360   52240 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace to be "Ready" ...
	I0108 21:36:59.286249   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:36:58.706848   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:00.707680   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:00.254099   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:02.753365   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:01.785574   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:03.786923   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:06.284087   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:03.206455   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:05.206549   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:04.754051   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:07.253530   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:08.286378   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:10.786106   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:07.210268   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:09.706209   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:11.707035   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:09.256884   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:11.752992   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:13.285580   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:15.785105   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:13.707577   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:16.207867   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:14.254204   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:16.259812   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:18.753430   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:18.285176   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:20.285459   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:18.706062   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:20.707076   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:21.253233   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:23.253611   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:22.786391   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:25.284856   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:23.206263   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:25.705049   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:25.254403   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:27.266601   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:27.285302   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:29.785228   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:27.706348   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:30.208061   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:29.753485   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:31.757824   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:31.787547   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:34.289853   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:32.708893   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:35.206801   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:34.252772   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:36.252994   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:38.755477   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:36.786286   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:39.284804   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:41.284862   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:37.207602   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:39.708423   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:41.253716   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:43.256939   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:43.785493   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:46.285267   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:42.207952   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:44.706778   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:46.707415   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:45.753221   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:48.252642   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:48.789142   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:50.791963   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:49.207916   49554 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:49.707357   49554 pod_ready.go:81] duration metric: took 4m0.008929913s waiting for pod "metrics-server-57f55c9bc5-hs8c4" in "kube-system" namespace to be "Ready" ...
	E0108 21:37:49.707385   49554 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0108 21:37:49.707391   49554 pod_ready.go:38] duration metric: took 4m2.190773163s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:37:49.707405   49554 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:37:49.707433   49554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:37:49.707474   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:37:49.780737   49554 cri.go:89] found id: "32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29"
	I0108 21:37:49.780774   49554 cri.go:89] found id: ""
	I0108 21:37:49.780783   49554 logs.go:284] 1 containers: [32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29]
	I0108 21:37:49.780844   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:37:49.787784   49554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 21:37:49.787859   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:37:49.838001   49554 cri.go:89] found id: "6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f"
	I0108 21:37:49.838026   49554 cri.go:89] found id: ""
	I0108 21:37:49.838036   49554 logs.go:284] 1 containers: [6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f]
	I0108 21:37:49.838091   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:37:49.842293   49554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 21:37:49.842370   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:37:49.882801   49554 cri.go:89] found id: "25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21"
	I0108 21:37:49.882831   49554 cri.go:89] found id: ""
	I0108 21:37:49.882842   49554 logs.go:284] 1 containers: [25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21]
	I0108 21:37:49.882898   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:37:49.887398   49554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:37:49.887465   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:37:49.928025   49554 cri.go:89] found id: "c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b"
	I0108 21:37:49.928051   49554 cri.go:89] found id: ""
	I0108 21:37:49.928061   49554 logs.go:284] 1 containers: [c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b]
	I0108 21:37:49.928136   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:37:49.933481   49554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:37:49.933546   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:37:49.987964   49554 cri.go:89] found id: "85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69"
	I0108 21:37:49.987987   49554 cri.go:89] found id: ""
	I0108 21:37:49.987993   49554 logs.go:284] 1 containers: [85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69]
	I0108 21:37:49.988037   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:37:49.992377   49554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:37:49.992445   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:37:50.031668   49554 cri.go:89] found id: "9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972"
	I0108 21:37:50.031702   49554 cri.go:89] found id: ""
	I0108 21:37:50.031712   49554 logs.go:284] 1 containers: [9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972]
	I0108 21:37:50.031805   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:37:50.036121   49554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 21:37:50.036181   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 21:37:50.076261   49554 cri.go:89] found id: ""
	I0108 21:37:50.076295   49554 logs.go:284] 0 containers: []
	W0108 21:37:50.076305   49554 logs.go:286] No container was found matching "kindnet"
	I0108 21:37:50.076311   49554 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:37:50.076374   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:37:50.118673   49554 cri.go:89] found id: "14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731"
	I0108 21:37:50.118701   49554 cri.go:89] found id: ""
	I0108 21:37:50.118712   49554 logs.go:284] 1 containers: [14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731]
	I0108 21:37:50.118768   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:37:50.122963   49554 logs.go:123] Gathering logs for coredns [25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21] ...
	I0108 21:37:50.122993   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21"
	I0108 21:37:50.166082   49554 logs.go:123] Gathering logs for kube-scheduler [c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b] ...
	I0108 21:37:50.166123   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b"
	I0108 21:37:50.217502   49554 logs.go:123] Gathering logs for kube-proxy [85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69] ...
	I0108 21:37:50.217535   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69"
	I0108 21:37:50.262277   49554 logs.go:123] Gathering logs for kube-controller-manager [9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972] ...
	I0108 21:37:50.262306   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972"
	I0108 21:37:50.323248   49554 logs.go:123] Gathering logs for storage-provisioner [14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731] ...
	I0108 21:37:50.323293   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731"
	I0108 21:37:50.370428   49554 logs.go:123] Gathering logs for dmesg ...
	I0108 21:37:50.370459   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:37:50.385129   49554 logs.go:123] Gathering logs for kube-apiserver [32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29] ...
	I0108 21:37:50.385169   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29"
	I0108 21:37:50.441472   49554 logs.go:123] Gathering logs for etcd [6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f] ...
	I0108 21:37:50.441511   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f"
	I0108 21:37:50.497295   49554 logs.go:123] Gathering logs for CRI-O ...
	I0108 21:37:50.497328   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 21:37:50.930956   49554 logs.go:123] Gathering logs for container status ...
	I0108 21:37:50.930993   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:37:50.980069   49554 logs.go:123] Gathering logs for kubelet ...
	I0108 21:37:50.980115   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:37:51.058858   49554 logs.go:138] Found kubelet problem: Jan 08 21:33:46 no-preload-420119 kubelet[4247]: W0108 21:33:46.480339    4247 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-420119" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-420119' and this object
	W0108 21:37:51.059050   49554 logs.go:138] Found kubelet problem: Jan 08 21:33:46 no-preload-420119 kubelet[4247]: E0108 21:33:46.480413    4247 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-420119" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-420119' and this object
	I0108 21:37:51.081348   49554 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:37:51.081407   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 21:37:51.276689   49554 out.go:309] Setting ErrFile to fd 2...
	I0108 21:37:51.276717   49554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:37:51.276766   49554 out.go:239] X Problems detected in kubelet:
	W0108 21:37:51.276785   49554 out.go:239]   Jan 08 21:33:46 no-preload-420119 kubelet[4247]: W0108 21:33:46.480339    4247 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-420119" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-420119' and this object
	W0108 21:37:51.276796   49554 out.go:239]   Jan 08 21:33:46 no-preload-420119 kubelet[4247]: E0108 21:33:46.480413    4247 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-420119" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-420119' and this object
	I0108 21:37:51.276806   49554 out.go:309] Setting ErrFile to fd 2...
	I0108 21:37:51.276823   49554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:37:50.258969   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:52.753173   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:53.285930   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:55.290083   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:55.253983   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:57.254372   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:57.785142   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:37:59.786039   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:01.278478   49554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:38:01.299189   49554 api_server.go:72] duration metric: took 4m14.946332328s to wait for apiserver process to appear ...
	I0108 21:38:01.299221   49554 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:38:01.299258   49554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:38:01.299312   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:38:01.356159   49554 cri.go:89] found id: "32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29"
	I0108 21:38:01.356182   49554 cri.go:89] found id: ""
	I0108 21:38:01.356190   49554 logs.go:284] 1 containers: [32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29]
	I0108 21:38:01.356233   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:01.361107   49554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 21:38:01.361184   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:38:01.414183   49554 cri.go:89] found id: "6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f"
	I0108 21:38:01.414208   49554 cri.go:89] found id: ""
	I0108 21:38:01.414218   49554 logs.go:284] 1 containers: [6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f]
	I0108 21:38:01.414273   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:01.418394   49554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 21:38:01.418467   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:38:01.467170   49554 cri.go:89] found id: "25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21"
	I0108 21:38:01.467197   49554 cri.go:89] found id: ""
	I0108 21:38:01.467205   49554 logs.go:284] 1 containers: [25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21]
	I0108 21:38:01.467254   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:01.471841   49554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:38:01.471980   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:38:01.519030   49554 cri.go:89] found id: "c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b"
	I0108 21:38:01.519062   49554 cri.go:89] found id: ""
	I0108 21:38:01.519072   49554 logs.go:284] 1 containers: [c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b]
	I0108 21:38:01.519130   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:01.524051   49554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:38:01.524137   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:38:01.568921   49554 cri.go:89] found id: "85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69"
	I0108 21:38:01.568949   49554 cri.go:89] found id: ""
	I0108 21:38:01.568959   49554 logs.go:284] 1 containers: [85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69]
	I0108 21:38:01.569020   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:01.573463   49554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:38:01.573527   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:38:01.616340   49554 cri.go:89] found id: "9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972"
	I0108 21:38:01.616366   49554 cri.go:89] found id: ""
	I0108 21:38:01.616376   49554 logs.go:284] 1 containers: [9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972]
	I0108 21:38:01.616430   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:01.620977   49554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 21:38:01.621045   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 21:38:01.664498   49554 cri.go:89] found id: ""
	I0108 21:38:01.664532   49554 logs.go:284] 0 containers: []
	W0108 21:38:01.664542   49554 logs.go:286] No container was found matching "kindnet"
	I0108 21:38:01.664556   49554 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:38:01.664633   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:38:01.708717   49554 cri.go:89] found id: "14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731"
	I0108 21:38:01.708761   49554 cri.go:89] found id: ""
	I0108 21:38:01.708770   49554 logs.go:284] 1 containers: [14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731]
	I0108 21:38:01.708830   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:01.714059   49554 logs.go:123] Gathering logs for CRI-O ...
	I0108 21:38:01.714089   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 21:37:59.755028   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:02.255487   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:01.786549   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:03.787723   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:06.286657   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:02.112248   49554 logs.go:123] Gathering logs for kubelet ...
	I0108 21:38:02.112290   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:38:02.190722   49554 logs.go:138] Found kubelet problem: Jan 08 21:33:46 no-preload-420119 kubelet[4247]: W0108 21:33:46.480339    4247 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-420119" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-420119' and this object
	W0108 21:38:02.190930   49554 logs.go:138] Found kubelet problem: Jan 08 21:33:46 no-preload-420119 kubelet[4247]: E0108 21:33:46.480413    4247 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-420119" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-420119' and this object
	I0108 21:38:02.213535   49554 logs.go:123] Gathering logs for dmesg ...
	I0108 21:38:02.213577   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:38:02.233926   49554 logs.go:123] Gathering logs for etcd [6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f] ...
	I0108 21:38:02.233960   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f"
	I0108 21:38:02.283826   49554 logs.go:123] Gathering logs for coredns [25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21] ...
	I0108 21:38:02.283863   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21"
	I0108 21:38:02.326667   49554 logs.go:123] Gathering logs for kube-controller-manager [9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972] ...
	I0108 21:38:02.326695   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972"
	I0108 21:38:02.389080   49554 logs.go:123] Gathering logs for storage-provisioner [14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731] ...
	I0108 21:38:02.389138   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731"
	I0108 21:38:02.438789   49554 logs.go:123] Gathering logs for container status ...
	I0108 21:38:02.438823   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:38:02.492200   49554 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:38:02.492236   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 21:38:02.622729   49554 logs.go:123] Gathering logs for kube-apiserver [32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29] ...
	I0108 21:38:02.622764   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29"
	I0108 21:38:02.679733   49554 logs.go:123] Gathering logs for kube-scheduler [c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b] ...
	I0108 21:38:02.679773   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b"
	I0108 21:38:02.736516   49554 logs.go:123] Gathering logs for kube-proxy [85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69] ...
	I0108 21:38:02.736553   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69"
	I0108 21:38:02.785979   49554 out.go:309] Setting ErrFile to fd 2...
	I0108 21:38:02.786016   49554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:38:02.786087   49554 out.go:239] X Problems detected in kubelet:
	W0108 21:38:02.786110   49554 out.go:239]   Jan 08 21:33:46 no-preload-420119 kubelet[4247]: W0108 21:33:46.480339    4247 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-420119" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-420119' and this object
	W0108 21:38:02.786119   49554 out.go:239]   Jan 08 21:33:46 no-preload-420119 kubelet[4247]: E0108 21:33:46.480413    4247 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-420119" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-420119' and this object
	I0108 21:38:02.786129   49554 out.go:309] Setting ErrFile to fd 2...
	I0108 21:38:02.786137   49554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:38:04.753794   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:06.754047   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:08.785681   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:11.285209   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:09.252717   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:11.254002   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:13.254165   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:13.785530   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:15.788617   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:12.786551   49554 api_server.go:253] Checking apiserver healthz at https://192.168.83.226:8443/healthz ...
	I0108 21:38:12.792206   49554 api_server.go:279] https://192.168.83.226:8443/healthz returned 200:
	ok
	I0108 21:38:12.794897   49554 api_server.go:141] control plane version: v1.29.0-rc.2
	I0108 21:38:12.794927   49554 api_server.go:131] duration metric: took 11.495698159s to wait for apiserver health ...
	I0108 21:38:12.794939   49554 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:38:12.794965   49554 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:38:12.795014   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:38:12.837787   49554 cri.go:89] found id: "32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29"
	I0108 21:38:12.837808   49554 cri.go:89] found id: ""
	I0108 21:38:12.837815   49554 logs.go:284] 1 containers: [32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29]
	I0108 21:38:12.837876   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:12.844342   49554 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 21:38:12.844410   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:38:12.891371   49554 cri.go:89] found id: "6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f"
	I0108 21:38:12.891395   49554 cri.go:89] found id: ""
	I0108 21:38:12.891404   49554 logs.go:284] 1 containers: [6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f]
	I0108 21:38:12.891456   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:12.895932   49554 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 21:38:12.895991   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:38:12.941507   49554 cri.go:89] found id: "25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21"
	I0108 21:38:12.941541   49554 cri.go:89] found id: ""
	I0108 21:38:12.941551   49554 logs.go:284] 1 containers: [25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21]
	I0108 21:38:12.941608   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:12.945936   49554 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:38:12.945990   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:38:12.990818   49554 cri.go:89] found id: "c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b"
	I0108 21:38:12.990846   49554 cri.go:89] found id: ""
	I0108 21:38:12.990860   49554 logs.go:284] 1 containers: [c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b]
	I0108 21:38:12.990920   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:12.994929   49554 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:38:12.994989   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:38:13.040914   49554 cri.go:89] found id: "85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69"
	I0108 21:38:13.040946   49554 cri.go:89] found id: ""
	I0108 21:38:13.040956   49554 logs.go:284] 1 containers: [85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69]
	I0108 21:38:13.041011   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:13.045532   49554 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:38:13.045592   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:38:13.090872   49554 cri.go:89] found id: "9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972"
	I0108 21:38:13.090899   49554 cri.go:89] found id: ""
	I0108 21:38:13.090909   49554 logs.go:284] 1 containers: [9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972]
	I0108 21:38:13.090965   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:13.096193   49554 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 21:38:13.096301   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 21:38:13.139578   49554 cri.go:89] found id: ""
	I0108 21:38:13.139610   49554 logs.go:284] 0 containers: []
	W0108 21:38:13.139620   49554 logs.go:286] No container was found matching "kindnet"
	I0108 21:38:13.139628   49554 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:38:13.139685   49554 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:38:13.178982   49554 cri.go:89] found id: "14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731"
	I0108 21:38:13.179004   49554 cri.go:89] found id: ""
	I0108 21:38:13.179011   49554 logs.go:284] 1 containers: [14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731]
	I0108 21:38:13.179054   49554 ssh_runner.go:195] Run: which crictl
	I0108 21:38:13.183123   49554 logs.go:123] Gathering logs for coredns [25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21] ...
	I0108 21:38:13.183144   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21"
	I0108 21:38:13.221303   49554 logs.go:123] Gathering logs for kube-scheduler [c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b] ...
	I0108 21:38:13.221330   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b"
	I0108 21:38:13.271841   49554 logs.go:123] Gathering logs for kube-proxy [85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69] ...
	I0108 21:38:13.271883   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69"
	I0108 21:38:13.318242   49554 logs.go:123] Gathering logs for kube-controller-manager [9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972] ...
	I0108 21:38:13.318282   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972"
	I0108 21:38:13.384268   49554 logs.go:123] Gathering logs for CRI-O ...
	I0108 21:38:13.384308   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 21:38:13.765898   49554 logs.go:123] Gathering logs for container status ...
	I0108 21:38:13.765935   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:38:13.820129   49554 logs.go:123] Gathering logs for kubelet ...
	I0108 21:38:13.820163   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0108 21:38:13.888888   49554 logs.go:138] Found kubelet problem: Jan 08 21:33:46 no-preload-420119 kubelet[4247]: W0108 21:33:46.480339    4247 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-420119" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-420119' and this object
	W0108 21:38:13.889053   49554 logs.go:138] Found kubelet problem: Jan 08 21:33:46 no-preload-420119 kubelet[4247]: E0108 21:33:46.480413    4247 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-420119" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-420119' and this object
	I0108 21:38:13.910068   49554 logs.go:123] Gathering logs for dmesg ...
	I0108 21:38:13.910095   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:38:13.927775   49554 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:38:13.927818   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 21:38:14.055715   49554 logs.go:123] Gathering logs for kube-apiserver [32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29] ...
	I0108 21:38:14.055745   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29"
	I0108 21:38:14.117063   49554 logs.go:123] Gathering logs for etcd [6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f] ...
	I0108 21:38:14.117099   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f"
	I0108 21:38:14.165121   49554 logs.go:123] Gathering logs for storage-provisioner [14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731] ...
	I0108 21:38:14.165154   49554 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731"
	I0108 21:38:14.204018   49554 out.go:309] Setting ErrFile to fd 2...
	I0108 21:38:14.204048   49554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0108 21:38:14.204132   49554 out.go:239] X Problems detected in kubelet:
	W0108 21:38:14.204147   49554 out.go:239]   Jan 08 21:33:46 no-preload-420119 kubelet[4247]: W0108 21:33:46.480339    4247 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-420119" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-420119' and this object
	W0108 21:38:14.204163   49554 out.go:239]   Jan 08 21:33:46 no-preload-420119 kubelet[4247]: E0108 21:33:46.480413    4247 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-420119" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-420119' and this object
	I0108 21:38:14.204175   49554 out.go:309] Setting ErrFile to fd 2...
	I0108 21:38:14.204183   49554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:38:15.753722   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:18.254553   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:18.285252   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:20.286050   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:20.752773   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:23.253438   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:24.214424   49554 system_pods.go:59] 8 kube-system pods found
	I0108 21:38:24.214455   49554 system_pods.go:61] "coredns-76f75df574-5jpjt" [23b66e29-32aa-4fc1-aa5f-18d774c4e374] Running
	I0108 21:38:24.214462   49554 system_pods.go:61] "etcd-no-preload-420119" [21656b2f-4872-4b06-ad70-87be737db371] Running
	I0108 21:38:24.214469   49554 system_pods.go:61] "kube-apiserver-no-preload-420119" [b7963b4d-6765-4996-a5e0-d33965862b92] Running
	I0108 21:38:24.214475   49554 system_pods.go:61] "kube-controller-manager-no-preload-420119" [c5e43cf5-c29d-4d83-a477-dd032c0c995c] Running
	I0108 21:38:24.214481   49554 system_pods.go:61] "kube-proxy-pxmhr" [a48789b6-fff3-4280-a96a-9d6595e5b8f6] Running
	I0108 21:38:24.214488   49554 system_pods.go:61] "kube-scheduler-no-preload-420119" [678b4d20-50eb-4275-9880-32f5eb4fa33e] Running
	I0108 21:38:24.214500   49554 system_pods.go:61] "metrics-server-57f55c9bc5-hs8c4" [84ed3a25-aa09-43c0-b994-e6dec44965ba] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:38:24.214507   49554 system_pods.go:61] "storage-provisioner" [e24c8545-1e62-4aa0-b8ae-351115323e3c] Running
	I0108 21:38:24.214518   49554 system_pods.go:74] duration metric: took 11.419572078s to wait for pod list to return data ...
	I0108 21:38:24.214539   49554 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:38:24.217406   49554 default_sa.go:45] found service account: "default"
	I0108 21:38:24.217429   49554 default_sa.go:55] duration metric: took 2.884789ms for default service account to be created ...
	I0108 21:38:24.217437   49554 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:38:24.225816   49554 system_pods.go:86] 8 kube-system pods found
	I0108 21:38:24.225853   49554 system_pods.go:89] "coredns-76f75df574-5jpjt" [23b66e29-32aa-4fc1-aa5f-18d774c4e374] Running
	I0108 21:38:24.225862   49554 system_pods.go:89] "etcd-no-preload-420119" [21656b2f-4872-4b06-ad70-87be737db371] Running
	I0108 21:38:24.225869   49554 system_pods.go:89] "kube-apiserver-no-preload-420119" [b7963b4d-6765-4996-a5e0-d33965862b92] Running
	I0108 21:38:24.225876   49554 system_pods.go:89] "kube-controller-manager-no-preload-420119" [c5e43cf5-c29d-4d83-a477-dd032c0c995c] Running
	I0108 21:38:24.225882   49554 system_pods.go:89] "kube-proxy-pxmhr" [a48789b6-fff3-4280-a96a-9d6595e5b8f6] Running
	I0108 21:38:24.225888   49554 system_pods.go:89] "kube-scheduler-no-preload-420119" [678b4d20-50eb-4275-9880-32f5eb4fa33e] Running
	I0108 21:38:24.225900   49554 system_pods.go:89] "metrics-server-57f55c9bc5-hs8c4" [84ed3a25-aa09-43c0-b994-e6dec44965ba] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:38:24.225917   49554 system_pods.go:89] "storage-provisioner" [e24c8545-1e62-4aa0-b8ae-351115323e3c] Running
	I0108 21:38:24.225928   49554 system_pods.go:126] duration metric: took 8.484873ms to wait for k8s-apps to be running ...
	I0108 21:38:24.225937   49554 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:38:24.225993   49554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:38:24.258727   49554 system_svc.go:56] duration metric: took 32.781962ms WaitForService to wait for kubelet.
	I0108 21:38:24.258757   49554 kubeadm.go:581] duration metric: took 4m37.905908618s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:38:24.258783   49554 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:38:24.264063   49554 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:38:24.264107   49554 node_conditions.go:123] node cpu capacity is 2
	I0108 21:38:24.264124   49554 node_conditions.go:105] duration metric: took 5.335443ms to run NodePressure ...
	I0108 21:38:24.264138   49554 start.go:228] waiting for startup goroutines ...
	I0108 21:38:24.264147   49554 start.go:233] waiting for cluster config update ...
	I0108 21:38:24.264162   49554 start.go:242] writing updated cluster config ...
	I0108 21:38:24.264527   49554 ssh_runner.go:195] Run: rm -f paused
	I0108 21:38:24.316726   49554 start.go:600] kubectl: 1.29.0, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0108 21:38:24.319068   49554 out.go:177] * Done! kubectl is now configured to use "no-preload-420119" cluster and "default" namespace by default
	I0108 21:38:22.786834   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:25.286115   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:25.253802   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:27.254345   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:27.787248   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:30.285184   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:29.753855   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:31.756467   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:32.784705   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:34.785840   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:34.255766   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:36.754492   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:36.786122   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:39.285322   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:39.252373   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:41.252642   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:43.254448   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:41.785445   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:43.786773   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:46.285566   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:45.753027   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:47.754143   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:48.785147   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:51.288180   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:49.754749   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:52.253959   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:53.785303   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:56.285612   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:54.755661   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:57.251896   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:58.786580   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:00.786756   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:38:59.253005   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:01.255078   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:03.256283   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:03.284996   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:05.785047   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:05.753386   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:08.255425   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:07.786701   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:10.286400   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:10.753339   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:13.252496   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:12.785155   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:14.786155   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:15.753737   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:18.253635   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:17.286437   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:19.784253   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:20.756650   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:23.252565   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:21.785481   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:23.786231   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:26.285745   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:25.254390   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:27.752999   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:28.787634   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:31.284925   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:29.753783   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:31.755035   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:33.285207   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:35.786894   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:34.252500   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:36.252818   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:38.254738   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:37.788011   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:40.284577   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:40.755522   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:43.253459   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:42.288084   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:44.785082   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:45.255820   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:47.753759   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:46.786273   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:49.284498   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:51.285053   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:50.253731   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:52.254168   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:53.785291   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:55.785844   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:54.255061   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:56.754166   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:58.285868   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:00.786340   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:39:59.252439   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:01.253062   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:03.254544   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:03.285408   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:05.285799   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:05.754163   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:08.254809   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:07.786090   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:10.285192   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:10.757519   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:13.254910   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:12.286000   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:14.785220   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:15.753042   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:17.757797   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:16.787398   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:19.284584   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:21.286291   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:20.260330   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:22.753834   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:23.791283   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:26.285235   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:25.253581   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:27.253929   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:28.787306   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:30.791316   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:29.254115   52569 pod_ready.go:102] pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:30.253138   52569 pod_ready.go:81] duration metric: took 4m0.008006343s waiting for pod "metrics-server-57f55c9bc5-46dvw" in "kube-system" namespace to be "Ready" ...
	E0108 21:40:30.253160   52569 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0108 21:40:30.253167   52569 pod_ready.go:38] duration metric: took 4m4.062595318s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:40:30.253181   52569 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:40:30.253214   52569 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:40:30.253257   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:40:30.319693   52569 cri.go:89] found id: "c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd"
	I0108 21:40:30.319715   52569 cri.go:89] found id: ""
	I0108 21:40:30.319725   52569 logs.go:284] 1 containers: [c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd]
	I0108 21:40:30.319780   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:30.325106   52569 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 21:40:30.325164   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:40:30.365240   52569 cri.go:89] found id: "079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a"
	I0108 21:40:30.365261   52569 cri.go:89] found id: ""
	I0108 21:40:30.365269   52569 logs.go:284] 1 containers: [079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a]
	I0108 21:40:30.365316   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:30.370308   52569 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 21:40:30.370382   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:40:30.419180   52569 cri.go:89] found id: "d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3"
	I0108 21:40:30.419207   52569 cri.go:89] found id: ""
	I0108 21:40:30.419216   52569 logs.go:284] 1 containers: [d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3]
	I0108 21:40:30.419279   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:30.424415   52569 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:40:30.424476   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:40:30.464713   52569 cri.go:89] found id: "419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6"
	I0108 21:40:30.464742   52569 cri.go:89] found id: ""
	I0108 21:40:30.464749   52569 logs.go:284] 1 containers: [419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6]
	I0108 21:40:30.464798   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:30.469955   52569 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:40:30.470027   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:40:30.509540   52569 cri.go:89] found id: "6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f"
	I0108 21:40:30.509567   52569 cri.go:89] found id: ""
	I0108 21:40:30.509576   52569 logs.go:284] 1 containers: [6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f]
	I0108 21:40:30.509633   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:30.514415   52569 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:40:30.514501   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:40:30.557916   52569 cri.go:89] found id: "14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2"
	I0108 21:40:30.557942   52569 cri.go:89] found id: ""
	I0108 21:40:30.557950   52569 logs.go:284] 1 containers: [14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2]
	I0108 21:40:30.557995   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:30.562782   52569 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 21:40:30.562854   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 21:40:30.615198   52569 cri.go:89] found id: ""
	I0108 21:40:30.615230   52569 logs.go:284] 0 containers: []
	W0108 21:40:30.615241   52569 logs.go:286] No container was found matching "kindnet"
	I0108 21:40:30.615261   52569 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:40:30.615333   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:40:30.658965   52569 cri.go:89] found id: "5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348"
	I0108 21:40:30.658992   52569 cri.go:89] found id: "a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4"
	I0108 21:40:30.658999   52569 cri.go:89] found id: ""
	I0108 21:40:30.659008   52569 logs.go:284] 2 containers: [5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348 a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4]
	I0108 21:40:30.659068   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:30.664426   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:30.671701   52569 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:40:30.671728   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 21:40:30.844606   52569 logs.go:123] Gathering logs for kube-apiserver [c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd] ...
	I0108 21:40:30.844641   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd"
	I0108 21:40:30.899497   52569 logs.go:123] Gathering logs for storage-provisioner [5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348] ...
	I0108 21:40:30.899529   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348"
	I0108 21:40:30.940272   52569 logs.go:123] Gathering logs for kubelet ...
	I0108 21:40:30.940299   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 21:40:30.998372   52569 logs.go:123] Gathering logs for kube-scheduler [419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6] ...
	I0108 21:40:30.998412   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6"
	I0108 21:40:31.046376   52569 logs.go:123] Gathering logs for container status ...
	I0108 21:40:31.046404   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:40:31.097770   52569 logs.go:123] Gathering logs for dmesg ...
	I0108 21:40:31.097806   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:40:31.112264   52569 logs.go:123] Gathering logs for coredns [d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3] ...
	I0108 21:40:31.112297   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3"
	I0108 21:40:31.158356   52569 logs.go:123] Gathering logs for CRI-O ...
	I0108 21:40:31.158390   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 21:40:31.647561   52569 logs.go:123] Gathering logs for storage-provisioner [a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4] ...
	I0108 21:40:31.647608   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4"
	I0108 21:40:31.707603   52569 logs.go:123] Gathering logs for etcd [079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a] ...
	I0108 21:40:31.707635   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a"
	I0108 21:40:31.771083   52569 logs.go:123] Gathering logs for kube-proxy [6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f] ...
	I0108 21:40:31.771137   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f"
	I0108 21:40:31.828302   52569 logs.go:123] Gathering logs for kube-controller-manager [14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2] ...
	I0108 21:40:31.828330   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2"
	I0108 21:40:33.285661   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:35.786008   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:34.400202   52569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:40:34.417847   52569 api_server.go:72] duration metric: took 4m15.608936782s to wait for apiserver process to appear ...
	I0108 21:40:34.417875   52569 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:40:34.417910   52569 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:40:34.417971   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:40:34.469473   52569 cri.go:89] found id: "c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd"
	I0108 21:40:34.469498   52569 cri.go:89] found id: ""
	I0108 21:40:34.469508   52569 logs.go:284] 1 containers: [c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd]
	I0108 21:40:34.469558   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:34.474212   52569 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 21:40:34.474279   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:40:34.512914   52569 cri.go:89] found id: "079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a"
	I0108 21:40:34.512938   52569 cri.go:89] found id: ""
	I0108 21:40:34.512947   52569 logs.go:284] 1 containers: [079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a]
	I0108 21:40:34.513002   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:34.518312   52569 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 21:40:34.518379   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:40:34.566871   52569 cri.go:89] found id: "d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3"
	I0108 21:40:34.566896   52569 cri.go:89] found id: ""
	I0108 21:40:34.566907   52569 logs.go:284] 1 containers: [d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3]
	I0108 21:40:34.566952   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:34.571256   52569 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:40:34.571322   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:40:34.611788   52569 cri.go:89] found id: "419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6"
	I0108 21:40:34.611818   52569 cri.go:89] found id: ""
	I0108 21:40:34.611827   52569 logs.go:284] 1 containers: [419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6]
	I0108 21:40:34.611883   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:34.616343   52569 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:40:34.616428   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:40:34.666185   52569 cri.go:89] found id: "6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f"
	I0108 21:40:34.666212   52569 cri.go:89] found id: ""
	I0108 21:40:34.666221   52569 logs.go:284] 1 containers: [6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f]
	I0108 21:40:34.666280   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:34.673299   52569 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:40:34.673361   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:40:34.715806   52569 cri.go:89] found id: "14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2"
	I0108 21:40:34.715832   52569 cri.go:89] found id: ""
	I0108 21:40:34.715842   52569 logs.go:284] 1 containers: [14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2]
	I0108 21:40:34.715911   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:34.720429   52569 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 21:40:34.720497   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 21:40:34.767453   52569 cri.go:89] found id: ""
	I0108 21:40:34.767481   52569 logs.go:284] 0 containers: []
	W0108 21:40:34.767491   52569 logs.go:286] No container was found matching "kindnet"
	I0108 21:40:34.767499   52569 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:40:34.767564   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:40:34.811517   52569 cri.go:89] found id: "5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348"
	I0108 21:40:34.811535   52569 cri.go:89] found id: "a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4"
	I0108 21:40:34.811540   52569 cri.go:89] found id: ""
	I0108 21:40:34.811549   52569 logs.go:284] 2 containers: [5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348 a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4]
	I0108 21:40:34.811593   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:34.817272   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:34.821689   52569 logs.go:123] Gathering logs for kube-proxy [6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f] ...
	I0108 21:40:34.821711   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f"
	I0108 21:40:34.867324   52569 logs.go:123] Gathering logs for kubelet ...
	I0108 21:40:34.867370   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 21:40:34.926748   52569 logs.go:123] Gathering logs for dmesg ...
	I0108 21:40:34.926781   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:40:34.941511   52569 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:40:34.941542   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 21:40:35.080943   52569 logs.go:123] Gathering logs for kube-apiserver [c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd] ...
	I0108 21:40:35.080988   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd"
	I0108 21:40:35.135288   52569 logs.go:123] Gathering logs for etcd [079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a] ...
	I0108 21:40:35.135322   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a"
	I0108 21:40:35.184539   52569 logs.go:123] Gathering logs for storage-provisioner [5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348] ...
	I0108 21:40:35.184572   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348"
	I0108 21:40:35.227566   52569 logs.go:123] Gathering logs for CRI-O ...
	I0108 21:40:35.227595   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 21:40:35.640322   52569 logs.go:123] Gathering logs for coredns [d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3] ...
	I0108 21:40:35.640360   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3"
	I0108 21:40:35.691431   52569 logs.go:123] Gathering logs for kube-scheduler [419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6] ...
	I0108 21:40:35.691483   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6"
	I0108 21:40:35.732033   52569 logs.go:123] Gathering logs for kube-controller-manager [14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2] ...
	I0108 21:40:35.732067   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2"
	I0108 21:40:35.799064   52569 logs.go:123] Gathering logs for storage-provisioner [a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4] ...
	I0108 21:40:35.799097   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4"
	I0108 21:40:35.845556   52569 logs.go:123] Gathering logs for container status ...
	I0108 21:40:35.845591   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:40:38.394929   52569 api_server.go:253] Checking apiserver healthz at https://192.168.50.165:8444/healthz ...
	I0108 21:40:38.400520   52569 api_server.go:279] https://192.168.50.165:8444/healthz returned 200:
	ok
	I0108 21:40:38.401722   52569 api_server.go:141] control plane version: v1.28.4
	I0108 21:40:38.401742   52569 api_server.go:131] duration metric: took 3.983861748s to wait for apiserver health ...
	I0108 21:40:38.401750   52569 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:40:38.401771   52569 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:40:38.401827   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:40:38.442639   52569 cri.go:89] found id: "c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd"
	I0108 21:40:38.442663   52569 cri.go:89] found id: ""
	I0108 21:40:38.442674   52569 logs.go:284] 1 containers: [c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd]
	I0108 21:40:38.442736   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:38.447343   52569 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 21:40:38.447408   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:40:38.487582   52569 cri.go:89] found id: "079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a"
	I0108 21:40:38.487612   52569 cri.go:89] found id: ""
	I0108 21:40:38.487622   52569 logs.go:284] 1 containers: [079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a]
	I0108 21:40:38.487682   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:38.492752   52569 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 21:40:38.492821   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:40:38.536326   52569 cri.go:89] found id: "d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3"
	I0108 21:40:38.536352   52569 cri.go:89] found id: ""
	I0108 21:40:38.536362   52569 logs.go:284] 1 containers: [d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3]
	I0108 21:40:38.536414   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:38.541253   52569 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:40:38.541331   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:40:38.584785   52569 cri.go:89] found id: "419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6"
	I0108 21:40:38.584811   52569 cri.go:89] found id: ""
	I0108 21:40:38.584818   52569 logs.go:284] 1 containers: [419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6]
	I0108 21:40:38.584872   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:38.589846   52569 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:40:38.589917   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:40:38.642931   52569 cri.go:89] found id: "6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f"
	I0108 21:40:38.642963   52569 cri.go:89] found id: ""
	I0108 21:40:38.642973   52569 logs.go:284] 1 containers: [6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f]
	I0108 21:40:38.643028   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:38.648685   52569 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:40:38.648828   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:40:38.697934   52569 cri.go:89] found id: "14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2"
	I0108 21:40:38.697958   52569 cri.go:89] found id: ""
	I0108 21:40:38.697965   52569 logs.go:284] 1 containers: [14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2]
	I0108 21:40:38.698017   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:38.702766   52569 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 21:40:38.702855   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 21:40:38.745659   52569 cri.go:89] found id: ""
	I0108 21:40:38.745706   52569 logs.go:284] 0 containers: []
	W0108 21:40:38.745717   52569 logs.go:286] No container was found matching "kindnet"
	I0108 21:40:38.745724   52569 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:40:38.745800   52569 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:40:38.793521   52569 cri.go:89] found id: "5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348"
	I0108 21:40:38.793551   52569 cri.go:89] found id: "a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4"
	I0108 21:40:38.793558   52569 cri.go:89] found id: ""
	I0108 21:40:38.793571   52569 logs.go:284] 2 containers: [5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348 a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4]
	I0108 21:40:38.793637   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:38.798426   52569 ssh_runner.go:195] Run: which crictl
	I0108 21:40:38.802782   52569 logs.go:123] Gathering logs for kube-proxy [6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f] ...
	I0108 21:40:38.802803   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f"
	I0108 21:40:38.849368   52569 logs.go:123] Gathering logs for container status ...
	I0108 21:40:38.849448   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:40:38.896470   52569 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:40:38.896499   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 21:40:39.028627   52569 logs.go:123] Gathering logs for kube-controller-manager [14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2] ...
	I0108 21:40:39.028659   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2"
	I0108 21:40:39.091568   52569 logs.go:123] Gathering logs for CRI-O ...
	I0108 21:40:39.091602   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 21:40:38.285149   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:40.786212   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:39.502349   52569 logs.go:123] Gathering logs for kube-apiserver [c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd] ...
	I0108 21:40:39.502388   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd"
	I0108 21:40:39.557114   52569 logs.go:123] Gathering logs for etcd [079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a] ...
	I0108 21:40:39.557148   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a"
	I0108 21:40:39.609417   52569 logs.go:123] Gathering logs for storage-provisioner [5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348] ...
	I0108 21:40:39.609464   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348"
	I0108 21:40:39.650119   52569 logs.go:123] Gathering logs for storage-provisioner [a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4] ...
	I0108 21:40:39.650163   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4"
	I0108 21:40:39.702853   52569 logs.go:123] Gathering logs for kubelet ...
	I0108 21:40:39.702888   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 21:40:39.762759   52569 logs.go:123] Gathering logs for dmesg ...
	I0108 21:40:39.762797   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:40:39.779806   52569 logs.go:123] Gathering logs for coredns [d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3] ...
	I0108 21:40:39.779838   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3"
	I0108 21:40:39.821110   52569 logs.go:123] Gathering logs for kube-scheduler [419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6] ...
	I0108 21:40:39.821144   52569 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6"
	I0108 21:40:42.380854   52569 system_pods.go:59] 8 kube-system pods found
	I0108 21:40:42.380881   52569 system_pods.go:61] "coredns-5dd5756b68-92m44" [048c7bfa-ea87-4f91-b002-c30fe11cac2a] Running
	I0108 21:40:42.380887   52569 system_pods.go:61] "etcd-default-k8s-diff-port-690577" [4fd93437-1a2a-499b-8266-21530044d7b0] Running
	I0108 21:40:42.380891   52569 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-690577" [84e50b6e-165c-4fb9-9127-c6ec504a23b1] Running
	I0108 21:40:42.380897   52569 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-690577" [2419d5e1-1b44-4bce-a603-99d1e64547ec] Running
	I0108 21:40:42.380901   52569 system_pods.go:61] "kube-proxy-qzxt5" [89e4ed5e-f9af-4a21-b744-73f9a3c4deda] Running
	I0108 21:40:42.380905   52569 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-690577" [fd74bf90-bef0-4a31-86dd-6999f46bc2e4] Running
	I0108 21:40:42.380912   52569 system_pods.go:61] "metrics-server-57f55c9bc5-46dvw" [6c095070-fdfd-4d65-b0b4-b4c234fad85d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:40:42.380917   52569 system_pods.go:61] "storage-provisioner" [69c923fb-6414-4802-9420-c02694250e2d] Running
	I0108 21:40:42.380925   52569 system_pods.go:74] duration metric: took 3.979170275s to wait for pod list to return data ...
	I0108 21:40:42.380932   52569 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:40:42.384574   52569 default_sa.go:45] found service account: "default"
	I0108 21:40:42.384603   52569 default_sa.go:55] duration metric: took 3.663117ms for default service account to be created ...
	I0108 21:40:42.384613   52569 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:40:42.390394   52569 system_pods.go:86] 8 kube-system pods found
	I0108 21:40:42.390417   52569 system_pods.go:89] "coredns-5dd5756b68-92m44" [048c7bfa-ea87-4f91-b002-c30fe11cac2a] Running
	I0108 21:40:42.390422   52569 system_pods.go:89] "etcd-default-k8s-diff-port-690577" [4fd93437-1a2a-499b-8266-21530044d7b0] Running
	I0108 21:40:42.390427   52569 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-690577" [84e50b6e-165c-4fb9-9127-c6ec504a23b1] Running
	I0108 21:40:42.390431   52569 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-690577" [2419d5e1-1b44-4bce-a603-99d1e64547ec] Running
	I0108 21:40:42.390435   52569 system_pods.go:89] "kube-proxy-qzxt5" [89e4ed5e-f9af-4a21-b744-73f9a3c4deda] Running
	I0108 21:40:42.390439   52569 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-690577" [fd74bf90-bef0-4a31-86dd-6999f46bc2e4] Running
	I0108 21:40:42.390446   52569 system_pods.go:89] "metrics-server-57f55c9bc5-46dvw" [6c095070-fdfd-4d65-b0b4-b4c234fad85d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:40:42.390451   52569 system_pods.go:89] "storage-provisioner" [69c923fb-6414-4802-9420-c02694250e2d] Running
	I0108 21:40:42.390458   52569 system_pods.go:126] duration metric: took 5.84001ms to wait for k8s-apps to be running ...
	I0108 21:40:42.390465   52569 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:40:42.390507   52569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:40:42.408578   52569 system_svc.go:56] duration metric: took 18.102176ms WaitForService to wait for kubelet.
	I0108 21:40:42.408610   52569 kubeadm.go:581] duration metric: took 4m23.599703434s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:40:42.408637   52569 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:40:42.413530   52569 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:40:42.413555   52569 node_conditions.go:123] node cpu capacity is 2
	I0108 21:40:42.413566   52569 node_conditions.go:105] duration metric: took 4.924366ms to run NodePressure ...
	I0108 21:40:42.413576   52569 start.go:228] waiting for startup goroutines ...
	I0108 21:40:42.413582   52569 start.go:233] waiting for cluster config update ...
	I0108 21:40:42.413591   52569 start.go:242] writing updated cluster config ...
	I0108 21:40:42.413908   52569 ssh_runner.go:195] Run: rm -f paused
	I0108 21:40:42.465003   52569 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 21:40:42.467462   52569 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-690577" cluster and "default" namespace by default
	I0108 21:40:43.286281   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:45.785933   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:48.284395   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:50.284942   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:52.285443   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:54.785447   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:56.785487   52240 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace has status "Ready":"False"
	I0108 21:40:57.284867   52240 pod_ready.go:81] duration metric: took 4m0.007494999s waiting for pod "metrics-server-57f55c9bc5-rj499" in "kube-system" namespace to be "Ready" ...
	E0108 21:40:57.284891   52240 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0108 21:40:57.284899   52240 pod_ready.go:38] duration metric: took 4m7.064431939s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 21:40:57.284912   52240 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:40:57.284935   52240 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:40:57.284974   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:40:57.354328   52240 cri.go:89] found id: "aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267"
	I0108 21:40:57.354357   52240 cri.go:89] found id: ""
	I0108 21:40:57.354365   52240 logs.go:284] 1 containers: [aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267]
	I0108 21:40:57.354429   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:40:57.359369   52240 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 21:40:57.359445   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:40:57.409950   52240 cri.go:89] found id: "07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e"
	I0108 21:40:57.409972   52240 cri.go:89] found id: ""
	I0108 21:40:57.409981   52240 logs.go:284] 1 containers: [07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e]
	I0108 21:40:57.410039   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:40:57.414852   52240 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 21:40:57.414927   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:40:57.463291   52240 cri.go:89] found id: "040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320"
	I0108 21:40:57.463317   52240 cri.go:89] found id: ""
	I0108 21:40:57.463325   52240 logs.go:284] 1 containers: [040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320]
	I0108 21:40:57.463378   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:40:57.468450   52240 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:40:57.468522   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:40:57.513334   52240 cri.go:89] found id: "18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b"
	I0108 21:40:57.513362   52240 cri.go:89] found id: ""
	I0108 21:40:57.513374   52240 logs.go:284] 1 containers: [18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b]
	I0108 21:40:57.513452   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:40:57.518000   52240 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:40:57.518069   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:40:57.570185   52240 cri.go:89] found id: "ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1"
	I0108 21:40:57.570206   52240 cri.go:89] found id: ""
	I0108 21:40:57.570213   52240 logs.go:284] 1 containers: [ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1]
	I0108 21:40:57.570260   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:40:57.575510   52240 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:40:57.575582   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:40:57.618961   52240 cri.go:89] found id: "3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87"
	I0108 21:40:57.618990   52240 cri.go:89] found id: ""
	I0108 21:40:57.618999   52240 logs.go:284] 1 containers: [3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87]
	I0108 21:40:57.619058   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:40:57.623808   52240 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 21:40:57.623896   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 21:40:57.677552   52240 cri.go:89] found id: ""
	I0108 21:40:57.677584   52240 logs.go:284] 0 containers: []
	W0108 21:40:57.677594   52240 logs.go:286] No container was found matching "kindnet"
	I0108 21:40:57.677601   52240 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:40:57.677659   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:40:57.719917   52240 cri.go:89] found id: "60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c"
	I0108 21:40:57.719937   52240 cri.go:89] found id: "82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5"
	I0108 21:40:57.719941   52240 cri.go:89] found id: ""
	I0108 21:40:57.719948   52240 logs.go:284] 2 containers: [60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c 82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5]
	I0108 21:40:57.720008   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:40:57.724452   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:40:57.729396   52240 logs.go:123] Gathering logs for dmesg ...
	I0108 21:40:57.729422   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:40:57.743343   52240 logs.go:123] Gathering logs for coredns [040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320] ...
	I0108 21:40:57.743372   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320"
	I0108 21:40:57.784498   52240 logs.go:123] Gathering logs for CRI-O ...
	I0108 21:40:57.784530   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 21:40:58.284347   52240 logs.go:123] Gathering logs for container status ...
	I0108 21:40:58.284383   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:40:58.335315   52240 logs.go:123] Gathering logs for kubelet ...
	I0108 21:40:58.335343   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 21:40:58.391548   52240 logs.go:123] Gathering logs for etcd [07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e] ...
	I0108 21:40:58.391587   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e"
	I0108 21:40:58.445078   52240 logs.go:123] Gathering logs for storage-provisioner [60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c] ...
	I0108 21:40:58.445111   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c"
	I0108 21:40:58.486062   52240 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:40:58.486096   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 21:40:58.646469   52240 logs.go:123] Gathering logs for kube-apiserver [aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267] ...
	I0108 21:40:58.646511   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267"
	I0108 21:40:58.694854   52240 logs.go:123] Gathering logs for kube-proxy [ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1] ...
	I0108 21:40:58.694897   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1"
	I0108 21:40:58.739117   52240 logs.go:123] Gathering logs for storage-provisioner [82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5] ...
	I0108 21:40:58.739144   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5"
	I0108 21:40:58.779684   52240 logs.go:123] Gathering logs for kube-scheduler [18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b] ...
	I0108 21:40:58.779736   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b"
	I0108 21:40:58.821349   52240 logs.go:123] Gathering logs for kube-controller-manager [3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87] ...
	I0108 21:40:58.821378   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87"
	I0108 21:41:01.371732   52240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:41:01.388681   52240 api_server.go:72] duration metric: took 4m18.949191387s to wait for apiserver process to appear ...
	I0108 21:41:01.388707   52240 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:41:01.388749   52240 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:41:01.388806   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:41:01.445198   52240 cri.go:89] found id: "aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267"
	I0108 21:41:01.445225   52240 cri.go:89] found id: ""
	I0108 21:41:01.445235   52240 logs.go:284] 1 containers: [aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267]
	I0108 21:41:01.445293   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:01.449708   52240 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 21:41:01.449783   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:41:01.494828   52240 cri.go:89] found id: "07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e"
	I0108 21:41:01.494856   52240 cri.go:89] found id: ""
	I0108 21:41:01.494868   52240 logs.go:284] 1 containers: [07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e]
	I0108 21:41:01.494934   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:01.499531   52240 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 21:41:01.499605   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:41:01.543726   52240 cri.go:89] found id: "040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320"
	I0108 21:41:01.543745   52240 cri.go:89] found id: ""
	I0108 21:41:01.543753   52240 logs.go:284] 1 containers: [040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320]
	I0108 21:41:01.543794   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:01.548382   52240 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:41:01.548437   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:41:01.591433   52240 cri.go:89] found id: "18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b"
	I0108 21:41:01.591457   52240 cri.go:89] found id: ""
	I0108 21:41:01.591465   52240 logs.go:284] 1 containers: [18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b]
	I0108 21:41:01.591509   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:01.596334   52240 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:41:01.596382   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:41:01.637730   52240 cri.go:89] found id: "ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1"
	I0108 21:41:01.637764   52240 cri.go:89] found id: ""
	I0108 21:41:01.637774   52240 logs.go:284] 1 containers: [ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1]
	I0108 21:41:01.637830   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:01.642318   52240 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:41:01.642406   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:41:01.691845   52240 cri.go:89] found id: "3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87"
	I0108 21:41:01.691870   52240 cri.go:89] found id: ""
	I0108 21:41:01.691879   52240 logs.go:284] 1 containers: [3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87]
	I0108 21:41:01.691939   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:01.696340   52240 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 21:41:01.696407   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 21:41:01.736682   52240 cri.go:89] found id: ""
	I0108 21:41:01.736712   52240 logs.go:284] 0 containers: []
	W0108 21:41:01.736721   52240 logs.go:286] No container was found matching "kindnet"
	I0108 21:41:01.736728   52240 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:41:01.736809   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:41:01.780496   52240 cri.go:89] found id: "60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c"
	I0108 21:41:01.780519   52240 cri.go:89] found id: "82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5"
	I0108 21:41:01.780524   52240 cri.go:89] found id: ""
	I0108 21:41:01.780530   52240 logs.go:284] 2 containers: [60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c 82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5]
	I0108 21:41:01.780573   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:01.785885   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:01.790750   52240 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:41:01.790779   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 21:41:01.932145   52240 logs.go:123] Gathering logs for kube-proxy [ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1] ...
	I0108 21:41:01.932187   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1"
	I0108 21:41:01.981411   52240 logs.go:123] Gathering logs for storage-provisioner [60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c] ...
	I0108 21:41:01.981443   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c"
	I0108 21:41:02.055992   52240 logs.go:123] Gathering logs for CRI-O ...
	I0108 21:41:02.056028   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 21:41:02.483190   52240 logs.go:123] Gathering logs for kube-scheduler [18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b] ...
	I0108 21:41:02.483234   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b"
	I0108 21:41:02.534877   52240 logs.go:123] Gathering logs for container status ...
	I0108 21:41:02.534926   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:41:02.585008   52240 logs.go:123] Gathering logs for dmesg ...
	I0108 21:41:02.585043   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:41:02.600166   52240 logs.go:123] Gathering logs for etcd [07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e] ...
	I0108 21:41:02.600194   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e"
	I0108 21:41:02.646047   52240 logs.go:123] Gathering logs for coredns [040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320] ...
	I0108 21:41:02.646081   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320"
	I0108 21:41:02.685166   52240 logs.go:123] Gathering logs for kube-controller-manager [3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87] ...
	I0108 21:41:02.685207   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87"
	I0108 21:41:02.747839   52240 logs.go:123] Gathering logs for kubelet ...
	I0108 21:41:02.747876   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 21:41:02.800844   52240 logs.go:123] Gathering logs for kube-apiserver [aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267] ...
	I0108 21:41:02.800893   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267"
	I0108 21:41:02.850097   52240 logs.go:123] Gathering logs for storage-provisioner [82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5] ...
	I0108 21:41:02.850131   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5"
	I0108 21:41:05.392493   52240 api_server.go:253] Checking apiserver healthz at https://192.168.39.142:8443/healthz ...
	I0108 21:41:05.399517   52240 api_server.go:279] https://192.168.39.142:8443/healthz returned 200:
	ok
	I0108 21:41:05.400950   52240 api_server.go:141] control plane version: v1.28.4
	I0108 21:41:05.400978   52240 api_server.go:131] duration metric: took 4.012262682s to wait for apiserver health ...
	I0108 21:41:05.400989   52240 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:41:05.401015   52240 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0108 21:41:05.401078   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0108 21:41:05.445380   52240 cri.go:89] found id: "aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267"
	I0108 21:41:05.445409   52240 cri.go:89] found id: ""
	I0108 21:41:05.445418   52240 logs.go:284] 1 containers: [aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267]
	I0108 21:41:05.445475   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:05.450468   52240 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0108 21:41:05.450537   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0108 21:41:05.497915   52240 cri.go:89] found id: "07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e"
	I0108 21:41:05.497937   52240 cri.go:89] found id: ""
	I0108 21:41:05.497944   52240 logs.go:284] 1 containers: [07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e]
	I0108 21:41:05.497990   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:05.503732   52240 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0108 21:41:05.503804   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0108 21:41:05.545706   52240 cri.go:89] found id: "040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320"
	I0108 21:41:05.545725   52240 cri.go:89] found id: ""
	I0108 21:41:05.545732   52240 logs.go:284] 1 containers: [040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320]
	I0108 21:41:05.545786   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:05.550154   52240 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0108 21:41:05.550247   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0108 21:41:05.600390   52240 cri.go:89] found id: "18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b"
	I0108 21:41:05.600414   52240 cri.go:89] found id: ""
	I0108 21:41:05.600421   52240 logs.go:284] 1 containers: [18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b]
	I0108 21:41:05.600464   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:05.605152   52240 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0108 21:41:05.605230   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0108 21:41:05.653540   52240 cri.go:89] found id: "ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1"
	I0108 21:41:05.653560   52240 cri.go:89] found id: ""
	I0108 21:41:05.653572   52240 logs.go:284] 1 containers: [ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1]
	I0108 21:41:05.653630   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:05.658912   52240 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0108 21:41:05.658988   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0108 21:41:05.706264   52240 cri.go:89] found id: "3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87"
	I0108 21:41:05.706298   52240 cri.go:89] found id: ""
	I0108 21:41:05.706309   52240 logs.go:284] 1 containers: [3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87]
	I0108 21:41:05.706371   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:05.711775   52240 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0108 21:41:05.711891   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0108 21:41:05.762772   52240 cri.go:89] found id: ""
	I0108 21:41:05.762793   52240 logs.go:284] 0 containers: []
	W0108 21:41:05.762799   52240 logs.go:286] No container was found matching "kindnet"
	I0108 21:41:05.762805   52240 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0108 21:41:05.762850   52240 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0108 21:41:05.807892   52240 cri.go:89] found id: "60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c"
	I0108 21:41:05.807922   52240 cri.go:89] found id: "82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5"
	I0108 21:41:05.807928   52240 cri.go:89] found id: ""
	I0108 21:41:05.807938   52240 logs.go:284] 2 containers: [60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c 82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5]
	I0108 21:41:05.807993   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:05.812747   52240 ssh_runner.go:195] Run: which crictl
	I0108 21:41:05.817173   52240 logs.go:123] Gathering logs for kube-apiserver [aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267] ...
	I0108 21:41:05.817204   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267"
	I0108 21:41:05.869688   52240 logs.go:123] Gathering logs for kube-scheduler [18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b] ...
	I0108 21:41:05.869724   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b"
	I0108 21:41:05.923448   52240 logs.go:123] Gathering logs for etcd [07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e] ...
	I0108 21:41:05.923479   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e"
	I0108 21:41:05.977444   52240 logs.go:123] Gathering logs for coredns [040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320] ...
	I0108 21:41:05.977479   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320"
	I0108 21:41:06.025441   52240 logs.go:123] Gathering logs for kubelet ...
	I0108 21:41:06.025477   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0108 21:41:06.085846   52240 logs.go:123] Gathering logs for dmesg ...
	I0108 21:41:06.085886   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0108 21:41:06.102728   52240 logs.go:123] Gathering logs for storage-provisioner [82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5] ...
	I0108 21:41:06.102759   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5"
	I0108 21:41:06.157754   52240 logs.go:123] Gathering logs for CRI-O ...
	I0108 21:41:06.157791   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0108 21:41:06.519964   52240 logs.go:123] Gathering logs for kube-controller-manager [3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87] ...
	I0108 21:41:06.520006   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87"
	I0108 21:41:06.585875   52240 logs.go:123] Gathering logs for storage-provisioner [60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c] ...
	I0108 21:41:06.585913   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c"
	I0108 21:41:06.632506   52240 logs.go:123] Gathering logs for container status ...
	I0108 21:41:06.632547   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0108 21:41:06.689680   52240 logs.go:123] Gathering logs for describe nodes ...
	I0108 21:41:06.689718   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0108 21:41:06.834069   52240 logs.go:123] Gathering logs for kube-proxy [ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1] ...
	I0108 21:41:06.834106   52240 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1"
	I0108 21:41:09.391812   52240 system_pods.go:59] 8 kube-system pods found
	I0108 21:41:09.391848   52240 system_pods.go:61] "coredns-5dd5756b68-jlpx5" [a3128151-c8ce-44da-a192-3b4a2ae1e3f8] Running
	I0108 21:41:09.391857   52240 system_pods.go:61] "etcd-embed-certs-930023" [392e8e69-7cd2-4346-aa55-887d736dfc01] Running
	I0108 21:41:09.391863   52240 system_pods.go:61] "kube-apiserver-embed-certs-930023" [98bd475f-c413-40c0-b99c-fdcc29687925] Running
	I0108 21:41:09.391871   52240 system_pods.go:61] "kube-controller-manager-embed-certs-930023" [31dd08df-27c2-4ed0-8c42-03ff09294e06] Running
	I0108 21:41:09.391876   52240 system_pods.go:61] "kube-proxy-8qs2r" [ed301cf2-3f54-4b4c-880b-2fe829c81093] Running
	I0108 21:41:09.391882   52240 system_pods.go:61] "kube-scheduler-embed-certs-930023" [3041f9c9-d48b-4910-90ca-127f4b9e2485] Running
	I0108 21:41:09.391895   52240 system_pods.go:61] "metrics-server-57f55c9bc5-rj499" [5873675f-8a6c-4404-be01-b46763a62f5c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:41:09.391904   52240 system_pods.go:61] "storage-provisioner" [1ef46fa1-8048-4f26-b999-6b78c5450cb8] Running
	I0108 21:41:09.391915   52240 system_pods.go:74] duration metric: took 3.99091814s to wait for pod list to return data ...
	I0108 21:41:09.391925   52240 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:41:09.395754   52240 default_sa.go:45] found service account: "default"
	I0108 21:41:09.395782   52240 default_sa.go:55] duration metric: took 3.846027ms for default service account to be created ...
	I0108 21:41:09.395793   52240 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:41:09.401870   52240 system_pods.go:86] 8 kube-system pods found
	I0108 21:41:09.401910   52240 system_pods.go:89] "coredns-5dd5756b68-jlpx5" [a3128151-c8ce-44da-a192-3b4a2ae1e3f8] Running
	I0108 21:41:09.401921   52240 system_pods.go:89] "etcd-embed-certs-930023" [392e8e69-7cd2-4346-aa55-887d736dfc01] Running
	I0108 21:41:09.401927   52240 system_pods.go:89] "kube-apiserver-embed-certs-930023" [98bd475f-c413-40c0-b99c-fdcc29687925] Running
	I0108 21:41:09.401933   52240 system_pods.go:89] "kube-controller-manager-embed-certs-930023" [31dd08df-27c2-4ed0-8c42-03ff09294e06] Running
	I0108 21:41:09.401939   52240 system_pods.go:89] "kube-proxy-8qs2r" [ed301cf2-3f54-4b4c-880b-2fe829c81093] Running
	I0108 21:41:09.401945   52240 system_pods.go:89] "kube-scheduler-embed-certs-930023" [3041f9c9-d48b-4910-90ca-127f4b9e2485] Running
	I0108 21:41:09.401953   52240 system_pods.go:89] "metrics-server-57f55c9bc5-rj499" [5873675f-8a6c-4404-be01-b46763a62f5c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 21:41:09.401961   52240 system_pods.go:89] "storage-provisioner" [1ef46fa1-8048-4f26-b999-6b78c5450cb8] Running
	I0108 21:41:09.401975   52240 system_pods.go:126] duration metric: took 6.173877ms to wait for k8s-apps to be running ...
	I0108 21:41:09.401985   52240 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:41:09.402033   52240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:41:09.417840   52240 system_svc.go:56] duration metric: took 15.844851ms WaitForService to wait for kubelet.
	I0108 21:41:09.417873   52240 kubeadm.go:581] duration metric: took 4m26.97838865s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:41:09.417897   52240 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:41:09.425730   52240 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:41:09.425760   52240 node_conditions.go:123] node cpu capacity is 2
	I0108 21:41:09.425769   52240 node_conditions.go:105] duration metric: took 7.866604ms to run NodePressure ...
	I0108 21:41:09.425780   52240 start.go:228] waiting for startup goroutines ...
	I0108 21:41:09.425786   52240 start.go:233] waiting for cluster config update ...
	I0108 21:41:09.425796   52240 start.go:242] writing updated cluster config ...
	I0108 21:41:09.426057   52240 ssh_runner.go:195] Run: rm -f paused
	I0108 21:41:09.479927   52240 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 21:41:09.482307   52240 out.go:177] * Done! kubectl is now configured to use "embed-certs-930023" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 21:26:52 UTC, ends at Mon 2024-01-08 21:41:53 UTC. --
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.453067755Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750113453039374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=c11e9d12-4fa8-471b-a2dd-aeb29fef43ff name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.454289607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ca8a3be8-cff1-415f-bb33-3a57d6d6f17a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.454390679Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ca8a3be8-cff1-415f-bb33-3a57d6d6f17a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.454706507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45705453af5ae5b66fe9aca07cbdf33f1eb74331544cb8c6b918baf29b1afab7,PodSandboxId:bebf1866ebff14c3fe21c5f4652f811f0361f806ef7c6223019526e130906a1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749582948160045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262224e-beec-4c9a-ab5e-4d8b5b5a84b5,},Annotations:map[string]string{io.kubernetes.container.hash: 1cd518b4,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6994c72f3faac2926546003babbb131531ccb172dbf71c8aa0117e6d4cf97cdf,PodSandboxId:fa40a16f14d19d3ddc2db3188216da787cf4665dd90cf2086cada10123f131ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704749582381063345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lk26t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd54061-1f29-4beb-9d69-fa6b747e4946,},Annotations:map[string]string{io.kubernetes.container.hash: 9a7a3d6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d578479c0833c49792a4077c4847e55f39e1d757282e57664033beca75fd16,PodSandboxId:417f377146dca48d55617cc810ee7063f4ff799e443dd896bb1b7d54c08f5c51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704749581558927516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-mz6r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af44b760-04e8-461b-9bd7-36bf0c631744,},Annotations:map[string]string{io.kubernetes.container.hash: b5849d50,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4381ac4fa2a77bd0f2d375a7ea81e43db296ab73d61b7be8f8132f9016da43f3,PodSandboxId:6c0d7c10d4173cda932d557df1a6d14d921861e89d75516325121315c718df1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704749556305947610,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4c3384bee006e499a0ca51cf09aab0,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73a08396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3dbd868cc4b827bbfb0ea8e65dd81f55752fea6cc9fb9edef7b01111e6ce582,PodSandboxId:7b7f52a434bf4c2fc1e5256cf3d55800ae18881b3e968ae9496ea81ba09c6fa4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704749555168083696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5445b5ed862b96c242f396b25654c4c0846fe3fb2ca220a5057d5cee6f07f608,PodSandboxId:fdebd71f762818c8178e55085d1d7e71b736f34fb7a2a0d7dfc6192b2335c0b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704749555019384637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b56319f04cebbe683f66f93d89f5de24f6b348a3e4af0aac7f85d75f4215bf34,PodSandboxId:cc9a4a04d750319a3aaf4bbddab49c23c04c786efb2380217bff44bc5b93f1d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704749554891904373,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e8fe7c5cc6414547d23c83a669e5fc,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c060036a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ca8a3be8-cff1-415f-bb33-3a57d6d6f17a name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.502801309Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6bbb7058-b6e1-4a42-87de-0ece9e604d03 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.502887221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6bbb7058-b6e1-4a42-87de-0ece9e604d03 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.504384024Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c3b5b526-0a12-4f3c-a45f-280169516e02 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.505047372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750113505026125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=c3b5b526-0a12-4f3c-a45f-280169516e02 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.505737137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=79283919-d015-4167-ac1a-47858e1c37ef name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.505841604Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=79283919-d015-4167-ac1a-47858e1c37ef name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.506091518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45705453af5ae5b66fe9aca07cbdf33f1eb74331544cb8c6b918baf29b1afab7,PodSandboxId:bebf1866ebff14c3fe21c5f4652f811f0361f806ef7c6223019526e130906a1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749582948160045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262224e-beec-4c9a-ab5e-4d8b5b5a84b5,},Annotations:map[string]string{io.kubernetes.container.hash: 1cd518b4,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6994c72f3faac2926546003babbb131531ccb172dbf71c8aa0117e6d4cf97cdf,PodSandboxId:fa40a16f14d19d3ddc2db3188216da787cf4665dd90cf2086cada10123f131ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704749582381063345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lk26t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd54061-1f29-4beb-9d69-fa6b747e4946,},Annotations:map[string]string{io.kubernetes.container.hash: 9a7a3d6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d578479c0833c49792a4077c4847e55f39e1d757282e57664033beca75fd16,PodSandboxId:417f377146dca48d55617cc810ee7063f4ff799e443dd896bb1b7d54c08f5c51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704749581558927516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-mz6r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af44b760-04e8-461b-9bd7-36bf0c631744,},Annotations:map[string]string{io.kubernetes.container.hash: b5849d50,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4381ac4fa2a77bd0f2d375a7ea81e43db296ab73d61b7be8f8132f9016da43f3,PodSandboxId:6c0d7c10d4173cda932d557df1a6d14d921861e89d75516325121315c718df1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704749556305947610,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4c3384bee006e499a0ca51cf09aab0,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73a08396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3dbd868cc4b827bbfb0ea8e65dd81f55752fea6cc9fb9edef7b01111e6ce582,PodSandboxId:7b7f52a434bf4c2fc1e5256cf3d55800ae18881b3e968ae9496ea81ba09c6fa4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704749555168083696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5445b5ed862b96c242f396b25654c4c0846fe3fb2ca220a5057d5cee6f07f608,PodSandboxId:fdebd71f762818c8178e55085d1d7e71b736f34fb7a2a0d7dfc6192b2335c0b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704749555019384637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b56319f04cebbe683f66f93d89f5de24f6b348a3e4af0aac7f85d75f4215bf34,PodSandboxId:cc9a4a04d750319a3aaf4bbddab49c23c04c786efb2380217bff44bc5b93f1d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704749554891904373,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e8fe7c5cc6414547d23c83a669e5fc,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c060036a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=79283919-d015-4167-ac1a-47858e1c37ef name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.554473836Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2c718f10-9bfb-41fa-8ce0-71d2f7707eba name=/runtime.v1.RuntimeService/Version
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.554596960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2c718f10-9bfb-41fa-8ce0-71d2f7707eba name=/runtime.v1.RuntimeService/Version
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.556871898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ccfde5f1-6061-4877-807b-4c19c2252524 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.557523870Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750113557502160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=ccfde5f1-6061-4877-807b-4c19c2252524 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.558565909Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5f6aff7b-3b0e-434b-a8d5-2a551fb8afbf name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.558739034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5f6aff7b-3b0e-434b-a8d5-2a551fb8afbf name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.559010271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45705453af5ae5b66fe9aca07cbdf33f1eb74331544cb8c6b918baf29b1afab7,PodSandboxId:bebf1866ebff14c3fe21c5f4652f811f0361f806ef7c6223019526e130906a1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749582948160045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262224e-beec-4c9a-ab5e-4d8b5b5a84b5,},Annotations:map[string]string{io.kubernetes.container.hash: 1cd518b4,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6994c72f3faac2926546003babbb131531ccb172dbf71c8aa0117e6d4cf97cdf,PodSandboxId:fa40a16f14d19d3ddc2db3188216da787cf4665dd90cf2086cada10123f131ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704749582381063345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lk26t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd54061-1f29-4beb-9d69-fa6b747e4946,},Annotations:map[string]string{io.kubernetes.container.hash: 9a7a3d6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d578479c0833c49792a4077c4847e55f39e1d757282e57664033beca75fd16,PodSandboxId:417f377146dca48d55617cc810ee7063f4ff799e443dd896bb1b7d54c08f5c51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704749581558927516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-mz6r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af44b760-04e8-461b-9bd7-36bf0c631744,},Annotations:map[string]string{io.kubernetes.container.hash: b5849d50,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4381ac4fa2a77bd0f2d375a7ea81e43db296ab73d61b7be8f8132f9016da43f3,PodSandboxId:6c0d7c10d4173cda932d557df1a6d14d921861e89d75516325121315c718df1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704749556305947610,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4c3384bee006e499a0ca51cf09aab0,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73a08396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3dbd868cc4b827bbfb0ea8e65dd81f55752fea6cc9fb9edef7b01111e6ce582,PodSandboxId:7b7f52a434bf4c2fc1e5256cf3d55800ae18881b3e968ae9496ea81ba09c6fa4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704749555168083696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5445b5ed862b96c242f396b25654c4c0846fe3fb2ca220a5057d5cee6f07f608,PodSandboxId:fdebd71f762818c8178e55085d1d7e71b736f34fb7a2a0d7dfc6192b2335c0b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704749555019384637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b56319f04cebbe683f66f93d89f5de24f6b348a3e4af0aac7f85d75f4215bf34,PodSandboxId:cc9a4a04d750319a3aaf4bbddab49c23c04c786efb2380217bff44bc5b93f1d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704749554891904373,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e8fe7c5cc6414547d23c83a669e5fc,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c060036a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5f6aff7b-3b0e-434b-a8d5-2a551fb8afbf name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.602158152Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=af388b0c-ff22-4722-9531-1223ee94311c name=/runtime.v1.RuntimeService/Version
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.602270134Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=af388b0c-ff22-4722-9531-1223ee94311c name=/runtime.v1.RuntimeService/Version
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.604921656Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b8a6448c-04f5-4899-b2b5-216b9790ea6c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.605771868Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750113605607872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=b8a6448c-04f5-4899-b2b5-216b9790ea6c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.607087726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d1f37091-214e-4b0b-8238-7fc130c3b8af name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.607196920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d1f37091-214e-4b0b-8238-7fc130c3b8af name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:41:53 old-k8s-version-879273 crio[710]: time="2024-01-08 21:41:53.607438843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:45705453af5ae5b66fe9aca07cbdf33f1eb74331544cb8c6b918baf29b1afab7,PodSandboxId:bebf1866ebff14c3fe21c5f4652f811f0361f806ef7c6223019526e130906a1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749582948160045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a262224e-beec-4c9a-ab5e-4d8b5b5a84b5,},Annotations:map[string]string{io.kubernetes.container.hash: 1cd518b4,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6994c72f3faac2926546003babbb131531ccb172dbf71c8aa0117e6d4cf97cdf,PodSandboxId:fa40a16f14d19d3ddc2db3188216da787cf4665dd90cf2086cada10123f131ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1704749582381063345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lk26t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd54061-1f29-4beb-9d69-fa6b747e4946,},Annotations:map[string]string{io.kubernetes.container.hash: 9a7a3d6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77d578479c0833c49792a4077c4847e55f39e1d757282e57664033beca75fd16,PodSandboxId:417f377146dca48d55617cc810ee7063f4ff799e443dd896bb1b7d54c08f5c51,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1704749581558927516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-mz6r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af44b760-04e8-461b-9bd7-36bf0c631744,},Annotations:map[string]string{io.kubernetes.container.hash: b5849d50,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4381ac4fa2a77bd0f2d375a7ea81e43db296ab73d61b7be8f8132f9016da43f3,PodSandboxId:6c0d7c10d4173cda932d557df1a6d14d921861e89d75516325121315c718df1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1704749556305947610,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4c3384bee006e499a0ca51cf09aab0,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 73a08396,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3dbd868cc4b827bbfb0ea8e65dd81f55752fea6cc9fb9edef7b01111e6ce582,PodSandboxId:7b7f52a434bf4c2fc1e5256cf3d55800ae18881b3e968ae9496ea81ba09c6fa4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1704749555168083696,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5445b5ed862b96c242f396b25654c4c0846fe3fb2ca220a5057d5cee6f07f608,PodSandboxId:fdebd71f762818c8178e55085d1d7e71b736f34fb7a2a0d7dfc6192b2335c0b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1704749555019384637,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b56319f04cebbe683f66f93d89f5de24f6b348a3e4af0aac7f85d75f4215bf34,PodSandboxId:cc9a4a04d750319a3aaf4bbddab49c23c04c786efb2380217bff44bc5b93f1d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1704749554891904373,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-879273,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e8fe7c5cc6414547d23c83a669e5fc,},Annotations:ma
p[string]string{io.kubernetes.container.hash: c060036a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d1f37091-214e-4b0b-8238-7fc130c3b8af name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	45705453af5ae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   8 minutes ago       Running             storage-provisioner       0                   bebf1866ebff1       storage-provisioner
	6994c72f3faac       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   8 minutes ago       Running             kube-proxy                0                   fa40a16f14d19       kube-proxy-lk26t
	77d578479c083       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   8 minutes ago       Running             coredns                   0                   417f377146dca       coredns-5644d7b6d9-mz6r2
	4381ac4fa2a77       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   9 minutes ago       Running             etcd                      0                   6c0d7c10d4173       etcd-old-k8s-version-879273
	a3dbd868cc4b8       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   9 minutes ago       Running             kube-scheduler            0                   7b7f52a434bf4       kube-scheduler-old-k8s-version-879273
	5445b5ed862b9       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   9 minutes ago       Running             kube-controller-manager   0                   fdebd71f76281       kube-controller-manager-old-k8s-version-879273
	b56319f04cebb       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   9 minutes ago       Running             kube-apiserver            0                   cc9a4a04d7503       kube-apiserver-old-k8s-version-879273
	
	
	==> coredns [77d578479c0833c49792a4077c4847e55f39e1d757282e57664033beca75fd16] <==
	.:53
	2024-01-08T21:33:02.031Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2024-01-08T21:33:02.031Z [INFO] CoreDNS-1.6.2
	2024-01-08T21:33:02.031Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2024-01-08T21:33:32.501Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	[INFO] Reloading complete
	2024-01-08T21:33:32.515Z [INFO] 127.0.0.1:36576 - 57875 "HINFO IN 419622387081155163.2360419095770267177. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015166891s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-879273
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-879273
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=old-k8s-version-879273
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_32_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:32:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:41:40 +0000   Mon, 08 Jan 2024 21:32:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:41:40 +0000   Mon, 08 Jan 2024 21:32:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:41:40 +0000   Mon, 08 Jan 2024 21:32:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:41:40 +0000   Mon, 08 Jan 2024 21:32:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.130
	  Hostname:    old-k8s-version-879273
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 b8d1541c76c64a00a2afcebf0c7336a6
	 System UUID:                b8d1541c-76c6-4a00-a2af-cebf0c7336a6
	 Boot ID:                    d5bc308c-d594-4c83-94f9-7f43c9981a97
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-mz6r2                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m54s
	  kube-system                etcd-old-k8s-version-879273                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                kube-apiserver-old-k8s-version-879273             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                kube-controller-manager-old-k8s-version-879273    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                kube-proxy-lk26t                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m54s
	  kube-system                kube-scheduler-old-k8s-version-879273             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                metrics-server-74d5856cc6-fckkc                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         8m51s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  9m19s (x8 over 9m19s)  kubelet, old-k8s-version-879273     Node old-k8s-version-879273 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s (x7 over 9m19s)  kubelet, old-k8s-version-879273     Node old-k8s-version-879273 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s (x8 over 9m19s)  kubelet, old-k8s-version-879273     Node old-k8s-version-879273 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m51s                  kube-proxy, old-k8s-version-879273  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan 8 21:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067981] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.391134] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.451050] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150459] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.500517] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.096566] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.108269] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.157357] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[Jan 8 21:27] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.215293] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[ +20.142529] systemd-fstab-generator[1028]: Ignoring "noauto" for root device
	[  +0.400797] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.847829] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.056769] kauditd_printk_skb: 2 callbacks suppressed
	[Jan 8 21:32] systemd-fstab-generator[3199]: Ignoring "noauto" for root device
	[  +1.682771] kauditd_printk_skb: 5 callbacks suppressed
	[Jan 8 21:33] kauditd_printk_skb: 11 callbacks suppressed
	[ +37.076163] hrtimer: interrupt took 5311058 ns
	
	
	==> etcd [4381ac4fa2a77bd0f2d375a7ea81e43db296ab73d61b7be8f8132f9016da43f3] <==
	2024-01-08 21:32:36.430529 I | raft: e74e8c49a87b232a became follower at term 0
	2024-01-08 21:32:36.430579 I | raft: newRaft e74e8c49a87b232a [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2024-01-08 21:32:36.430604 I | raft: e74e8c49a87b232a became follower at term 1
	2024-01-08 21:32:36.439582 W | auth: simple token is not cryptographically signed
	2024-01-08 21:32:36.446289 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-08 21:32:36.447886 I | etcdserver: e74e8c49a87b232a as single-node; fast-forwarding 9 ticks (election ticks 10)
	2024-01-08 21:32:36.448299 I | etcdserver/membership: added member e74e8c49a87b232a [https://192.168.61.130:2380] to cluster 58f57b06ecc7462a
	2024-01-08 21:32:36.450423 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-08 21:32:36.450775 I | embed: listening for metrics on http://192.168.61.130:2381
	2024-01-08 21:32:36.450972 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-08 21:32:37.131530 I | raft: e74e8c49a87b232a is starting a new election at term 1
	2024-01-08 21:32:37.131623 I | raft: e74e8c49a87b232a became candidate at term 2
	2024-01-08 21:32:37.131806 I | raft: e74e8c49a87b232a received MsgVoteResp from e74e8c49a87b232a at term 2
	2024-01-08 21:32:37.131924 I | raft: e74e8c49a87b232a became leader at term 2
	2024-01-08 21:32:37.131949 I | raft: raft.node: e74e8c49a87b232a elected leader e74e8c49a87b232a at term 2
	2024-01-08 21:32:37.132433 I | etcdserver: published {Name:old-k8s-version-879273 ClientURLs:[https://192.168.61.130:2379]} to cluster 58f57b06ecc7462a
	2024-01-08 21:32:37.132503 I | embed: ready to serve client requests
	2024-01-08 21:32:37.133338 I | etcdserver: setting up the initial cluster version to 3.3
	2024-01-08 21:32:37.133575 I | embed: ready to serve client requests
	2024-01-08 21:32:37.133869 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-08 21:32:37.136246 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-08 21:32:37.136470 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-08 21:32:37.140128 I | embed: serving client requests on 192.168.61.130:2379
	2024-01-08 21:33:01.766005 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (216.90547ms) to execute
	2024-01-08 21:33:02.008602 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" " with result "range_response_count:0 size:5" took too long (130.988867ms) to execute
	
	
	==> kernel <==
	 21:41:54 up 15 min,  0 users,  load average: 0.20, 0.18, 0.16
	Linux old-k8s-version-879273 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [b56319f04cebbe683f66f93d89f5de24f6b348a3e4af0aac7f85d75f4215bf34] <==
	I0108 21:34:03.278718       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 21:34:03.278877       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 21:34:03.278926       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:34:03.278933       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:36:03.279534       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 21:36:03.279990       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 21:36:03.280130       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:36:03.280161       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:37:41.405118       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 21:37:41.405704       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 21:37:41.405903       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:37:41.405989       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:38:41.406574       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 21:38:41.406814       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 21:38:41.406929       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:38:41.406962       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:40:41.407558       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0108 21:40:41.408079       1 handler_proxy.go:99] no RequestInfo found in the context
	E0108 21:40:41.408210       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:40:41.408241       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5445b5ed862b96c242f396b25654c4c0846fe3fb2ca220a5057d5cee6f07f608] <==
	E0108 21:35:31.482288       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:35:40.323102       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:36:01.734898       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:36:12.325837       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:36:31.987182       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:36:44.328108       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:37:02.240015       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:37:16.330326       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:37:32.492369       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:37:48.333222       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:38:02.745099       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:38:20.335832       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:38:32.997807       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:38:52.338134       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:39:03.250371       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:39:24.340498       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:39:33.502299       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:39:56.342588       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:40:03.755415       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:40:28.345163       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:40:34.007973       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:41:00.347517       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:41:04.260777       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0108 21:41:32.350433       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0108 21:41:34.513155       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [6994c72f3faac2926546003babbb131531ccb172dbf71c8aa0117e6d4cf97cdf] <==
	W0108 21:33:02.680153       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0108 21:33:02.700373       1 node.go:135] Successfully retrieved node IP: 192.168.61.130
	I0108 21:33:02.700456       1 server_others.go:149] Using iptables Proxier.
	I0108 21:33:02.701106       1 server.go:529] Version: v1.16.0
	I0108 21:33:02.710485       1 config.go:131] Starting endpoints config controller
	I0108 21:33:02.714961       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0108 21:33:02.711913       1 config.go:313] Starting service config controller
	I0108 21:33:02.715342       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0108 21:33:02.815845       1 shared_informer.go:204] Caches are synced for service config 
	I0108 21:33:02.816203       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [a3dbd868cc4b827bbfb0ea8e65dd81f55752fea6cc9fb9edef7b01111e6ce582] <==
	W0108 21:32:40.412036       1 authentication.go:79] Authentication is disabled
	I0108 21:32:40.412180       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0108 21:32:40.414095       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0108 21:32:40.480430       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:32:40.480753       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:32:40.480847       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:32:40.481328       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:32:40.481976       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:32:40.482707       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:32:40.483123       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:32:40.483216       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:32:40.483438       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:32:40.483952       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:32:40.487271       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:32:41.482899       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:32:41.485050       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:32:41.486866       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:32:41.489153       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:32:41.492361       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:32:41.494412       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:32:41.494580       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:32:41.495495       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:32:41.497838       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:32:41.497921       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:32:41.498844       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:26:52 UTC, ends at Mon 2024-01-08 21:41:54 UTC. --
	Jan 08 21:37:28 old-k8s-version-879273 kubelet[3217]: E0108 21:37:28.940460    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:37:34 old-k8s-version-879273 kubelet[3217]: E0108 21:37:34.012834    3217 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 08 21:37:39 old-k8s-version-879273 kubelet[3217]: E0108 21:37:39.940408    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:37:50 old-k8s-version-879273 kubelet[3217]: E0108 21:37:50.940318    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:38:02 old-k8s-version-879273 kubelet[3217]: E0108 21:38:02.939483    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:38:16 old-k8s-version-879273 kubelet[3217]: E0108 21:38:16.940076    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:38:31 old-k8s-version-879273 kubelet[3217]: E0108 21:38:31.940007    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:38:46 old-k8s-version-879273 kubelet[3217]: E0108 21:38:46.953892    3217 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 21:38:46 old-k8s-version-879273 kubelet[3217]: E0108 21:38:46.953998    3217 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 21:38:46 old-k8s-version-879273 kubelet[3217]: E0108 21:38:46.954069    3217 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 21:38:46 old-k8s-version-879273 kubelet[3217]: E0108 21:38:46.954117    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 08 21:39:00 old-k8s-version-879273 kubelet[3217]: E0108 21:39:00.940244    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:39:11 old-k8s-version-879273 kubelet[3217]: E0108 21:39:11.940833    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:39:24 old-k8s-version-879273 kubelet[3217]: E0108 21:39:24.939893    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:39:39 old-k8s-version-879273 kubelet[3217]: E0108 21:39:39.940467    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:39:50 old-k8s-version-879273 kubelet[3217]: E0108 21:39:50.939953    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:40:05 old-k8s-version-879273 kubelet[3217]: E0108 21:40:05.940839    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:40:18 old-k8s-version-879273 kubelet[3217]: E0108 21:40:18.940580    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:40:30 old-k8s-version-879273 kubelet[3217]: E0108 21:40:30.940820    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:40:44 old-k8s-version-879273 kubelet[3217]: E0108 21:40:44.941250    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:40:56 old-k8s-version-879273 kubelet[3217]: E0108 21:40:56.940017    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:41:11 old-k8s-version-879273 kubelet[3217]: E0108 21:41:11.940297    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:41:22 old-k8s-version-879273 kubelet[3217]: E0108 21:41:22.939896    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:41:34 old-k8s-version-879273 kubelet[3217]: E0108 21:41:34.940269    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 08 21:41:48 old-k8s-version-879273 kubelet[3217]: E0108 21:41:48.939964    3217 pod_workers.go:191] Error syncing pod 32c88827-5a4d-47f7-8484-bce82bfafdc8 ("metrics-server-74d5856cc6-fckkc_kube-system(32c88827-5a4d-47f7-8484-bce82bfafdc8)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [45705453af5ae5b66fe9aca07cbdf33f1eb74331544cb8c6b918baf29b1afab7] <==
	I0108 21:33:03.086351       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 21:33:03.100276       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 21:33:03.100377       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 21:33:03.108497       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 21:33:03.109052       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-879273_bda6d268-2c09-4841-8bb3-648f5a0f7187!
	I0108 21:33:03.111217       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b172339d-113d-412b-b328-943d105cc612", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-879273_bda6d268-2c09-4841-8bb3-648f5a0f7187 became leader
	I0108 21:33:03.209769       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-879273_bda6d268-2c09-4841-8bb3-648f5a0f7187!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-879273 -n old-k8s-version-879273
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-879273 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-fckkc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-879273 describe pod metrics-server-74d5856cc6-fckkc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-879273 describe pod metrics-server-74d5856cc6-fckkc: exit status 1 (67.514471ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-fckkc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-879273 describe pod metrics-server-74d5856cc6-fckkc: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (468.18s)
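A rough manual equivalent of the post-mortem above, for re-checking this kind of failure by hand: the harness lists non-running pods with a field selector and then describes the pod by the exact name it captured, which may no longer exist by the time the describe runs (hence the "not found" error above). Assuming the kubeconfig context from this log (old-k8s-version-879273) and the usual k8s-app=metrics-server label from the metrics-server manifests (an assumption, not shown in this log), the checks look like:

	# list pods not in the Running phase, across all namespaces (same field selector the harness uses)
	kubectl --context old-k8s-version-879273 get pods -A --field-selector=status.phase!=Running
	# inspect metrics-server by label rather than by a fixed pod name, so a replaced pod is still found
	kubectl --context old-k8s-version-879273 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context old-k8s-version-879273 -n kube-system describe pods -l k8s-app=metrics-server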

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (508.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 21:39:26.820568   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 21:40:36.429804   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-420119 -n no-preload-420119
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-08 21:46:51.147493052 +0000 UTC m=+5835.072384420
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-420119 -n no-preload-420119
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-420119 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-420119 logs -n 25: (1.286661678s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p pause-046839                                        | pause-046839                 | jenkins | v1.32.0 | 08 Jan 24 21:17 UTC | 08 Jan 24 21:22 UTC |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-420119             | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC | 08 Jan 24 21:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-420119                                   | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-001550                              | cert-expiration-001550       | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC | 08 Jan 24 21:22 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-420119                  | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-420119                                   | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:21 UTC | 08 Jan 24 21:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-001550                              | cert-expiration-001550       | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	| start   | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p pause-046839                                        | pause-046839                 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	| delete  | -p                                                     | disable-driver-mounts-216454 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	|         | disable-driver-mounts-216454                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:29 UTC |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-930023            | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-690577  | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC |                     |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-930023                 | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:30 UTC | 08 Jan 24 21:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-690577       | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC | 08 Jan 24 21:40 UTC |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-879273                              | old-k8s-version-879273       | jenkins | v1.32.0 | 08 Jan 24 21:41 UTC | 08 Jan 24 21:41 UTC |
	| start   | -p newest-cni-233407 --memory=2200 --alsologtostderr   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:41 UTC | 08 Jan 24 21:42 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-233407             | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:42 UTC | 08 Jan 24 21:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-233407                                   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-233407                  | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-233407 --memory=2200 --alsologtostderr   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:45 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:45:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:45:29.847210   55729 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:45:29.847367   55729 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:45:29.847377   55729 out.go:309] Setting ErrFile to fd 2...
	I0108 21:45:29.847384   55729 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:45:29.847600   55729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:45:29.848244   55729 out.go:303] Setting JSON to false
	I0108 21:45:29.849179   55729 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8854,"bootTime":1704741476,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:45:29.849241   55729 start.go:138] virtualization: kvm guest
	I0108 21:45:29.852012   55729 out.go:177] * [newest-cni-233407] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:45:29.853782   55729 notify.go:220] Checking for updates...
	I0108 21:45:29.855536   55729 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:45:29.857328   55729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:45:29.859130   55729 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:45:29.860918   55729 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:45:29.862490   55729 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:45:29.864197   55729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:45:29.866331   55729 config.go:182] Loaded profile config "newest-cni-233407": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 21:45:29.866819   55729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:45:29.866864   55729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:45:29.881615   55729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I0108 21:45:29.882073   55729 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:45:29.882586   55729 main.go:141] libmachine: Using API Version  1
	I0108 21:45:29.882610   55729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:45:29.882956   55729 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:45:29.883162   55729 main.go:141] libmachine: (newest-cni-233407) Calling .DriverName
	I0108 21:45:29.883384   55729 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:45:29.883660   55729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:45:29.883692   55729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:45:29.899068   55729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38461
	I0108 21:45:29.899519   55729 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:45:29.899984   55729 main.go:141] libmachine: Using API Version  1
	I0108 21:45:29.900006   55729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:45:29.900361   55729 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:45:29.900518   55729 main.go:141] libmachine: (newest-cni-233407) Calling .DriverName
	I0108 21:45:29.939985   55729 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 21:45:29.941348   55729 start.go:298] selected driver: kvm2
	I0108 21:45:29.941366   55729 start.go:902] validating driver "kvm2" against &{Name:newest-cni-233407 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-233407 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.145 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s Schedule
dStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:45:29.941500   55729 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:45:29.942417   55729 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:45:29.942499   55729 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:45:29.958900   55729 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:45:29.959336   55729 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0108 21:45:29.959408   55729 cni.go:84] Creating CNI manager for ""
	I0108 21:45:29.959418   55729 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:45:29.959432   55729 start_flags.go:323] config:
	{Name:newest-cni-233407 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-233407 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.145 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:45:29.959638   55729 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:45:29.961542   55729 out.go:177] * Starting control plane node newest-cni-233407 in cluster newest-cni-233407
	I0108 21:45:29.963428   55729 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 21:45:29.963486   55729 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 21:45:29.963504   55729 cache.go:56] Caching tarball of preloaded images
	I0108 21:45:29.963610   55729 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:45:29.963622   55729 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0108 21:45:29.963738   55729 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/newest-cni-233407/config.json ...
	I0108 21:45:29.964076   55729 start.go:365] acquiring machines lock for newest-cni-233407: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:45:29.964170   55729 start.go:369] acquired machines lock for "newest-cni-233407" in 44.539µs
	I0108 21:45:29.964198   55729 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:45:29.964205   55729 fix.go:54] fixHost starting: 
	I0108 21:45:29.964508   55729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:45:29.964539   55729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:45:29.979905   55729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42927
	I0108 21:45:29.980380   55729 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:45:29.980904   55729 main.go:141] libmachine: Using API Version  1
	I0108 21:45:29.980926   55729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:45:29.981268   55729 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:45:29.981477   55729 main.go:141] libmachine: (newest-cni-233407) Calling .DriverName
	I0108 21:45:29.981620   55729 main.go:141] libmachine: (newest-cni-233407) Calling .GetState
	I0108 21:45:29.983321   55729 fix.go:102] recreateIfNeeded on newest-cni-233407: state=Running err=<nil>
	W0108 21:45:29.983357   55729 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:45:29.985847   55729 out.go:177] * Updating the running kvm2 "newest-cni-233407" VM ...
	I0108 21:45:29.987612   55729 machine.go:88] provisioning docker machine ...
	I0108 21:45:29.987663   55729 main.go:141] libmachine: (newest-cni-233407) Calling .DriverName
	I0108 21:45:29.987897   55729 main.go:141] libmachine: (newest-cni-233407) Calling .GetMachineName
	I0108 21:45:29.988072   55729 buildroot.go:166] provisioning hostname "newest-cni-233407"
	I0108 21:45:29.988113   55729 main.go:141] libmachine: (newest-cni-233407) Calling .GetMachineName
	I0108 21:45:29.988416   55729 main.go:141] libmachine: (newest-cni-233407) Calling .GetSSHHostname
	I0108 21:45:29.991261   55729 main.go:141] libmachine: (newest-cni-233407) DBG | domain newest-cni-233407 has defined MAC address 52:54:00:08:5b:36 in network mk-newest-cni-233407
	I0108 21:45:29.992235   55729 main.go:141] libmachine: (newest-cni-233407) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:5b:36", ip: ""} in network mk-newest-cni-233407: {Iface:virbr3 ExpiryTime:2024-01-08 22:42:12 +0000 UTC Type:0 Mac:52:54:00:08:5b:36 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:newest-cni-233407 Clientid:01:52:54:00:08:5b:36}
	I0108 21:45:29.992362   55729 main.go:141] libmachine: (newest-cni-233407) Calling .GetSSHPort
	I0108 21:45:29.992269   55729 main.go:141] libmachine: (newest-cni-233407) DBG | domain newest-cni-233407 has defined IP address 192.168.61.145 and MAC address 52:54:00:08:5b:36 in network mk-newest-cni-233407
	I0108 21:45:29.993667   55729 main.go:141] libmachine: (newest-cni-233407) Calling .GetSSHKeyPath
	I0108 21:45:29.994103   55729 main.go:141] libmachine: (newest-cni-233407) Calling .GetSSHKeyPath
	I0108 21:45:29.994259   55729 main.go:141] libmachine: (newest-cni-233407) Calling .GetSSHUsername
	I0108 21:45:29.994466   55729 main.go:141] libmachine: Using SSH client type: native
	I0108 21:45:29.994781   55729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0108 21:45:29.994796   55729 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-233407 && echo "newest-cni-233407" | sudo tee /etc/hostname
	I0108 21:45:32.864383   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:45:35.932448   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:45:42.016394   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:45:45.088449   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:45:51.164394   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:45:54.236339   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:46:03.356327   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:46:06.428441   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:46:12.508379   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:46:15.584305   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:46:21.664316   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:46:24.736367   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:46:30.812360   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:46:33.884433   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:46:39.968369   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:46:43.036337   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:46:49.116353   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
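
The repeated "no route to host" dials above are libmachine's provisioner failing to reach the restarted VM's SSH port. A minimal sketch of the same reachability probe, assuming the guest address 192.168.61.145:22 from the DHCP lease logged earlier; probeSSH is a hypothetical helper for triage, not part of minikube or libmachine:

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH performs a plain TCP dial to the guest's SSH port, mirroring the
// "Error dialing TCP ... connect: no route to host" failures in the log above.
func probeSSH(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. "connect: no route to host" while the VM network is down
	}
	defer conn.Close()
	return nil
}

func main() {
	if err := probeSSH("192.168.61.145:22", 5*time.Second); err != nil {
		fmt.Println("ssh port unreachable:", err)
		return
	}
	fmt.Println("ssh port reachable")
}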
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 21:28:07 UTC, ends at Mon 2024-01-08 21:46:52 UTC. --
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.859312746Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750411859299125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=d0dd5f9b-223c-4e1e-b1c1-54a0975feabd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.859960775Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a84011d9-0177-4c18-82f9-0e9cdfd09272 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.860035851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a84011d9-0177-4c18-82f9-0e9cdfd09272 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.860320987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731,PodSandboxId:e94f948f1d838ad6d038180f5e88395cc1de5c0103024e03ce6b9eaa9c72a26c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704749629076678099,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24c8545-1e62-4aa0-b8ae-351115323e3c,},Annotations:map[string]string{io.kubernetes.container.hash: caf4ad2d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21,PodSandboxId:501b0e9a00146c6c0da4523cac45556a96c03884c026e8bd61ba86e690d9b607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704749628591073761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5jpjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b66e29-32aa-4fc1-aa5f-18d774c4e374,},Annotations:map[string]string{io.kubernetes.container.hash: 458fc86a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69,PodSandboxId:d5f9ca3f151f3597675e76026eb1413f51815858d282eed3959296b539e406c4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704749627665519326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pxmhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: a48789b6-fff3-4280-a96a-9d6595e5b8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 65511e57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b,PodSandboxId:fa06432c9da687189e0ca950ea2cdbdb6d4997b08464fc0f01a1d739d000e78b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704749606739550080,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7bc75921c0d4bf3b58532134c05e5edd,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29,PodSandboxId:17979811949a9ecab14938f4b469090d479ff1462d52d50f3217cc6963c76f38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704749606378815000,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c52cab930
852502f9e8d255e9901ba9,},Annotations:map[string]string{io.kubernetes.container.hash: b4adb560,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972,PodSandboxId:6af2634451427ec5653f3c5e651e53f619bc1e646258c0c4cc5a3c0bd8e5b4c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704749606340239646,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: ad0e89e0a7ec08b5c3a24c1a9559b679,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f,PodSandboxId:17d1a66cc916b966f67987fed088e88974de5cb6faea430f7e1fa3885177f6cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704749605981207161,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccce908958281a684cb739ff3583fee1,},A
nnotations:map[string]string{io.kubernetes.container.hash: c1f2143a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a84011d9-0177-4c18-82f9-0e9cdfd09272 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.901648395Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d5210c09-8ca3-43fd-980f-6adc8ff41ea6 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.901733921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d5210c09-8ca3-43fd-980f-6adc8ff41ea6 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.902987111Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d4e49c24-9bed-4708-8dea-ace9a89a5137 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.903388634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750411903375332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=d4e49c24-9bed-4708-8dea-ace9a89a5137 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.904391891Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4997ee62-a74a-4c6c-b4ad-39faaf45b726 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.904482960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4997ee62-a74a-4c6c-b4ad-39faaf45b726 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.904706832Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731,PodSandboxId:e94f948f1d838ad6d038180f5e88395cc1de5c0103024e03ce6b9eaa9c72a26c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704749629076678099,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24c8545-1e62-4aa0-b8ae-351115323e3c,},Annotations:map[string]string{io.kubernetes.container.hash: caf4ad2d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21,PodSandboxId:501b0e9a00146c6c0da4523cac45556a96c03884c026e8bd61ba86e690d9b607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704749628591073761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5jpjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b66e29-32aa-4fc1-aa5f-18d774c4e374,},Annotations:map[string]string{io.kubernetes.container.hash: 458fc86a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69,PodSandboxId:d5f9ca3f151f3597675e76026eb1413f51815858d282eed3959296b539e406c4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704749627665519326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pxmhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: a48789b6-fff3-4280-a96a-9d6595e5b8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 65511e57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b,PodSandboxId:fa06432c9da687189e0ca950ea2cdbdb6d4997b08464fc0f01a1d739d000e78b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704749606739550080,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7bc75921c0d4bf3b58532134c05e5edd,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29,PodSandboxId:17979811949a9ecab14938f4b469090d479ff1462d52d50f3217cc6963c76f38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704749606378815000,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c52cab930
852502f9e8d255e9901ba9,},Annotations:map[string]string{io.kubernetes.container.hash: b4adb560,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972,PodSandboxId:6af2634451427ec5653f3c5e651e53f619bc1e646258c0c4cc5a3c0bd8e5b4c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704749606340239646,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: ad0e89e0a7ec08b5c3a24c1a9559b679,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f,PodSandboxId:17d1a66cc916b966f67987fed088e88974de5cb6faea430f7e1fa3885177f6cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704749605981207161,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccce908958281a684cb739ff3583fee1,},A
nnotations:map[string]string{io.kubernetes.container.hash: c1f2143a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4997ee62-a74a-4c6c-b4ad-39faaf45b726 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.947991603Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ca36b489-1ebe-4a8c-9548-1ed2ec009dca name=/runtime.v1.RuntimeService/Version
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.948163828Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ca36b489-1ebe-4a8c-9548-1ed2ec009dca name=/runtime.v1.RuntimeService/Version
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.949953615Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d532f8c7-7b4b-41c0-9711-254d07768085 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.950414185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750411950397122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=d532f8c7-7b4b-41c0-9711-254d07768085 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.951289416Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e5511688-3bf1-454c-9329-5a26d0fdff28 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.951368221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e5511688-3bf1-454c-9329-5a26d0fdff28 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.951539490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731,PodSandboxId:e94f948f1d838ad6d038180f5e88395cc1de5c0103024e03ce6b9eaa9c72a26c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704749629076678099,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24c8545-1e62-4aa0-b8ae-351115323e3c,},Annotations:map[string]string{io.kubernetes.container.hash: caf4ad2d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21,PodSandboxId:501b0e9a00146c6c0da4523cac45556a96c03884c026e8bd61ba86e690d9b607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704749628591073761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5jpjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b66e29-32aa-4fc1-aa5f-18d774c4e374,},Annotations:map[string]string{io.kubernetes.container.hash: 458fc86a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69,PodSandboxId:d5f9ca3f151f3597675e76026eb1413f51815858d282eed3959296b539e406c4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704749627665519326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pxmhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: a48789b6-fff3-4280-a96a-9d6595e5b8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 65511e57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b,PodSandboxId:fa06432c9da687189e0ca950ea2cdbdb6d4997b08464fc0f01a1d739d000e78b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704749606739550080,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7bc75921c0d4bf3b58532134c05e5edd,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29,PodSandboxId:17979811949a9ecab14938f4b469090d479ff1462d52d50f3217cc6963c76f38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704749606378815000,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c52cab930
852502f9e8d255e9901ba9,},Annotations:map[string]string{io.kubernetes.container.hash: b4adb560,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972,PodSandboxId:6af2634451427ec5653f3c5e651e53f619bc1e646258c0c4cc5a3c0bd8e5b4c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704749606340239646,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: ad0e89e0a7ec08b5c3a24c1a9559b679,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f,PodSandboxId:17d1a66cc916b966f67987fed088e88974de5cb6faea430f7e1fa3885177f6cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704749605981207161,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccce908958281a684cb739ff3583fee1,},A
nnotations:map[string]string{io.kubernetes.container.hash: c1f2143a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e5511688-3bf1-454c-9329-5a26d0fdff28 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.988804141Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4e8d7bcc-8a1c-4193-a8d2-1562d8899ce8 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.988957531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4e8d7bcc-8a1c-4193-a8d2-1562d8899ce8 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.990428893Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7b0320f7-0d7c-453d-ab8d-5ccb78762f93 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.990968394Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750411990905346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=7b0320f7-0d7c-453d-ab8d-5ccb78762f93 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.991915004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a5e4a269-80ff-40c0-b86d-b6d3b16be632 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.991991038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a5e4a269-80ff-40c0-b86d-b6d3b16be632 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:46:51 no-preload-420119 crio[713]: time="2024-01-08 21:46:51.992256635Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731,PodSandboxId:e94f948f1d838ad6d038180f5e88395cc1de5c0103024e03ce6b9eaa9c72a26c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1704749629076678099,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e24c8545-1e62-4aa0-b8ae-351115323e3c,},Annotations:map[string]string{io.kubernetes.container.hash: caf4ad2d,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21,PodSandboxId:501b0e9a00146c6c0da4523cac45556a96c03884c026e8bd61ba86e690d9b607,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1704749628591073761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5jpjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b66e29-32aa-4fc1-aa5f-18d774c4e374,},Annotations:map[string]string{io.kubernetes.container.hash: 458fc86a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69,PodSandboxId:d5f9ca3f151f3597675e76026eb1413f51815858d282eed3959296b539e406c4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1704749627665519326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pxmhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: a48789b6-fff3-4280-a96a-9d6595e5b8f6,},Annotations:map[string]string{io.kubernetes.container.hash: 65511e57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b,PodSandboxId:fa06432c9da687189e0ca950ea2cdbdb6d4997b08464fc0f01a1d739d000e78b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1704749606739550080,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
7bc75921c0d4bf3b58532134c05e5edd,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29,PodSandboxId:17979811949a9ecab14938f4b469090d479ff1462d52d50f3217cc6963c76f38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1704749606378815000,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c52cab930
852502f9e8d255e9901ba9,},Annotations:map[string]string{io.kubernetes.container.hash: b4adb560,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972,PodSandboxId:6af2634451427ec5653f3c5e651e53f619bc1e646258c0c4cc5a3c0bd8e5b4c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1704749606340239646,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: ad0e89e0a7ec08b5c3a24c1a9559b679,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f,PodSandboxId:17d1a66cc916b966f67987fed088e88974de5cb6faea430f7e1fa3885177f6cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1704749605981207161,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-420119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccce908958281a684cb739ff3583fee1,},A
nnotations:map[string]string{io.kubernetes.container.hash: c1f2143a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a5e4a269-80ff-40c0-b86d-b6d3b16be632 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	14e9230c0bc2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   e94f948f1d838       storage-provisioner
	25bb904eb9c30       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   501b0e9a00146       coredns-76f75df574-5jpjt
	85a9596516807       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   13 minutes ago      Running             kube-proxy                0                   d5f9ca3f151f3       kube-proxy-pxmhr
	c795b2e5797a0       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   13 minutes ago      Running             kube-scheduler            2                   fa06432c9da68       kube-scheduler-no-preload-420119
	32edd011a5fef       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   13 minutes ago      Running             kube-apiserver            2                   17979811949a9       kube-apiserver-no-preload-420119
	9ca9aabdd59b4       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   13 minutes ago      Running             kube-controller-manager   2                   6af2634451427       kube-controller-manager-no-preload-420119
	6fde134cfa736       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   13 minutes ago      Running             etcd                      2                   17d1a66cc916b       etcd-no-preload-420119
	
	
	==> coredns [25bb904eb9c30dca76b20c70dd3ef0a849884710ee5ae4d44c3fc10cea41cb21] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44003 - 38529 "HINFO IN 6349450318507821817.3758731084071288481. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017042411s
	
	
	==> describe nodes <==
	Name:               no-preload-420119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-420119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=no-preload-420119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_33_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:33:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-420119
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:46:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:44:04 +0000   Mon, 08 Jan 2024 21:33:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:44:04 +0000   Mon, 08 Jan 2024 21:33:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:44:04 +0000   Mon, 08 Jan 2024 21:33:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:44:04 +0000   Mon, 08 Jan 2024 21:33:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.226
	  Hostname:    no-preload-420119
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ed834d48b204e5097d96bd87425174d
	  System UUID:                3ed834d4-8b20-4e50-97d9-6bd87425174d
	  Boot ID:                    ca8890ce-fc22-4c2d-95fd-bbf17e8ee1c2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-5jpjt                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-420119                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-420119             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-420119    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-pxmhr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-420119             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-hs8c4              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node no-preload-420119 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node no-preload-420119 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node no-preload-420119 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m   kubelet          Node no-preload-420119 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m   kubelet          Node no-preload-420119 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node no-preload-420119 event: Registered Node no-preload-420119 in Controller
	
	
	==> dmesg <==
	[Jan 8 21:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073259] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jan 8 21:28] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.763056] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.171857] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.872792] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.129544] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.130681] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.184387] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.112927] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.253493] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[ +30.176908] systemd-fstab-generator[1329]: Ignoring "noauto" for root device
	[Jan 8 21:29] kauditd_printk_skb: 29 callbacks suppressed
	[Jan 8 21:33] systemd-fstab-generator[3913]: Ignoring "noauto" for root device
	[  +8.851936] systemd-fstab-generator[4240]: Ignoring "noauto" for root device
	[ +14.412755] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [6fde134cfa7361b19cf922d19a8a77735f41e232a3423459bc1bb0cee775db2f] <==
	{"level":"info","ts":"2024-01-08T21:33:27.393474Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a4ee1dc8fb7f7b07","local-member-attributes":"{Name:no-preload-420119 ClientURLs:[https://192.168.83.226:2379]}","request-path":"/0/members/a4ee1dc8fb7f7b07/attributes","cluster-id":"c80c007592d68f00","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:33:27.393552Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:33:27.393905Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:33:27.398694Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:33:27.40078Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:33:27.401865Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T21:33:27.416222Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c80c007592d68f00","local-member-id":"a4ee1dc8fb7f7b07","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:33:27.416739Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:33:27.416871Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:33:27.421344Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T21:33:27.44304Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.226:2379"}
	{"level":"warn","ts":"2024-01-08T21:35:57.872689Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.852164ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-hs8c4\" ","response":"range_response_count:1 size:4238"}
	{"level":"info","ts":"2024-01-08T21:35:57.873322Z","caller":"traceutil/trace.go:171","msg":"trace[1949572502] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-hs8c4; range_end:; response_count:1; response_revision:558; }","duration":"177.585072ms","start":"2024-01-08T21:35:57.6957Z","end":"2024-01-08T21:35:57.873285Z","steps":["trace[1949572502] 'range keys from in-memory index tree'  (duration: 176.76019ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:35:58.340361Z","caller":"traceutil/trace.go:171","msg":"trace[2006251662] linearizableReadLoop","detail":"{readStateIndex:600; appliedIndex:599; }","duration":"143.998629ms","start":"2024-01-08T21:35:58.196347Z","end":"2024-01-08T21:35:58.340346Z","steps":["trace[2006251662] 'read index received'  (duration: 143.754706ms)","trace[2006251662] 'applied index is now lower than readState.Index'  (duration: 243.483µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T21:35:58.340536Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.198323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-hs8c4\" ","response":"range_response_count:1 size:4238"}
	{"level":"info","ts":"2024-01-08T21:35:58.34056Z","caller":"traceutil/trace.go:171","msg":"trace[76319083] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-hs8c4; range_end:; response_count:1; response_revision:559; }","duration":"144.236226ms","start":"2024-01-08T21:35:58.196317Z","end":"2024-01-08T21:35:58.340553Z","steps":["trace[76319083] 'agreement among raft nodes before linearized reading'  (duration: 144.124793ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:35:58.340751Z","caller":"traceutil/trace.go:171","msg":"trace[354969599] transaction","detail":"{read_only:false; response_revision:559; number_of_response:1; }","duration":"273.156417ms","start":"2024-01-08T21:35:58.067588Z","end":"2024-01-08T21:35:58.340745Z","steps":["trace[354969599] 'process raft request'  (duration: 272.623165ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:36:22.664528Z","caller":"traceutil/trace.go:171","msg":"trace[53704162] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"115.107617ms","start":"2024-01-08T21:36:22.549386Z","end":"2024-01-08T21:36:22.664494Z","steps":["trace[53704162] 'process raft request'  (duration: 114.891097ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:42:27.292204Z","caller":"traceutil/trace.go:171","msg":"trace[1406911014] linearizableReadLoop","detail":"{readStateIndex:993; appliedIndex:992; }","duration":"157.312174ms","start":"2024-01-08T21:42:27.134741Z","end":"2024-01-08T21:42:27.292053Z","steps":["trace[1406911014] 'read index received'  (duration: 157.033241ms)","trace[1406911014] 'applied index is now lower than readState.Index'  (duration: 278.239µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T21:42:27.292477Z","caller":"traceutil/trace.go:171","msg":"trace[1032896026] transaction","detail":"{read_only:false; response_revision:874; number_of_response:1; }","duration":"233.864935ms","start":"2024-01-08T21:42:27.058593Z","end":"2024-01-08T21:42:27.292458Z","steps":["trace[1032896026] 'process raft request'  (duration: 233.23279ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:42:27.292733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.884425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:42:27.292819Z","caller":"traceutil/trace.go:171","msg":"trace[954843386] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:874; }","duration":"158.091978ms","start":"2024-01-08T21:42:27.134715Z","end":"2024-01-08T21:42:27.292807Z","steps":["trace[954843386] 'agreement among raft nodes before linearized reading'  (duration: 157.815658ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:43:28.037004Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":680}
	{"level":"info","ts":"2024-01-08T21:43:28.040336Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":680,"took":"2.812053ms","hash":2226449464}
	{"level":"info","ts":"2024-01-08T21:43:28.040407Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2226449464,"revision":680,"compact-revision":-1}
	
	
	==> kernel <==
	 21:46:52 up 18 min,  0 users,  load average: 0.13, 0.20, 0.22
	Linux no-preload-420119 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [32edd011a5fef7629b0be96fef7af71650fb7f1a5987ee9f3da66fb3de1fbe29] <==
	I0108 21:41:31.048651       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:43:30.050990       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:43:30.051495       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0108 21:43:31.051700       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:43:31.051795       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:43:31.051817       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:43:31.051900       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:43:31.052034       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:43:31.053339       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:44:31.052580       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:44:31.052669       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:44:31.052679       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:44:31.053887       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:44:31.054069       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:44:31.054255       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:46:31.053177       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:46:31.053252       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:46:31.053261       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:46:31.054490       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:46:31.054681       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:46:31.054734       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9ca9aabdd59b493d56b55a07d84406375aa6971149a6a55d5697155c97ee6972] <==
	I0108 21:41:16.058254       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:41:45.552022       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:41:46.067910       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:42:15.558666       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:42:16.080805       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:42:45.565091       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:42:46.096983       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:43:15.572507       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:43:16.106931       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:43:45.579404       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:43:46.118735       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:44:15.588655       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:44:16.130506       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:44:45.596364       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:44:46.139051       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 21:44:50.784623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="213.373µs"
	I0108 21:45:01.788963       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="228.446µs"
	E0108 21:45:15.601714       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:45:16.148075       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:45:45.608614       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:45:46.157082       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:46:15.614280       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:46:16.165692       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:46:45.624711       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:46:46.175180       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [85a95965168076953889229eb5323ea817c4b331799b10e867394b3aa7278e69] <==
	I0108 21:33:48.926349       1 server_others.go:72] "Using iptables proxy"
	I0108 21:33:49.002092       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.83.226"]
	I0108 21:33:49.153001       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0108 21:33:49.153083       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:33:49.153405       1 server_others.go:168] "Using iptables Proxier"
	I0108 21:33:49.157318       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:33:49.157705       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0108 21:33:49.157750       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:33:49.159009       1 config.go:188] "Starting service config controller"
	I0108 21:33:49.159066       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:33:49.159223       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:33:49.159251       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:33:49.161903       1 config.go:315] "Starting node config controller"
	I0108 21:33:49.161950       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:33:49.260620       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:33:49.266250       1 shared_informer.go:318] Caches are synced for node config
	I0108 21:33:49.266317       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c795b2e5797a05556f15b66a747f92f8266d24041cd853c457c1f9bf450c6b8b] <==
	W0108 21:33:30.930307       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:33:30.930425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:33:31.013474       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:33:31.013665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:33:31.098574       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:33:31.098674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:33:31.137351       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0108 21:33:31.137453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0108 21:33:31.155527       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 21:33:31.155631       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 21:33:31.222022       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:33:31.222203       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 21:33:31.230776       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 21:33:31.230837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 21:33:31.280322       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:33:31.280379       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 21:33:31.342503       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:33:31.342653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 21:33:31.435467       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:33:31.435525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:33:31.578881       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:33:31.578985       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:33:31.598681       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0108 21:33:31.598810       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0108 21:33:33.690235       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:28:07 UTC, ends at Mon 2024-01-08 21:46:52 UTC. --
	Jan 08 21:44:33 no-preload-420119 kubelet[4247]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:44:33 no-preload-420119 kubelet[4247]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:44:33 no-preload-420119 kubelet[4247]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:44:38 no-preload-420119 kubelet[4247]: E0108 21:44:38.786578    4247 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 08 21:44:38 no-preload-420119 kubelet[4247]: E0108 21:44:38.786624    4247 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 08 21:44:38 no-preload-420119 kubelet[4247]: E0108 21:44:38.786827    4247 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ch2kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-hs8c4_kube-system(84ed3a25-aa09-43c0-b994-e6dec44965ba): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 21:44:38 no-preload-420119 kubelet[4247]: E0108 21:44:38.786867    4247 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-hs8c4" podUID="84ed3a25-aa09-43c0-b994-e6dec44965ba"
	Jan 08 21:44:50 no-preload-420119 kubelet[4247]: E0108 21:44:50.763321    4247 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hs8c4" podUID="84ed3a25-aa09-43c0-b994-e6dec44965ba"
	Jan 08 21:45:01 no-preload-420119 kubelet[4247]: E0108 21:45:01.766468    4247 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hs8c4" podUID="84ed3a25-aa09-43c0-b994-e6dec44965ba"
	Jan 08 21:45:13 no-preload-420119 kubelet[4247]: E0108 21:45:13.762069    4247 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hs8c4" podUID="84ed3a25-aa09-43c0-b994-e6dec44965ba"
	Jan 08 21:45:26 no-preload-420119 kubelet[4247]: E0108 21:45:26.762383    4247 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hs8c4" podUID="84ed3a25-aa09-43c0-b994-e6dec44965ba"
	Jan 08 21:45:33 no-preload-420119 kubelet[4247]: E0108 21:45:33.889879    4247 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:45:33 no-preload-420119 kubelet[4247]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:45:33 no-preload-420119 kubelet[4247]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:45:33 no-preload-420119 kubelet[4247]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:45:38 no-preload-420119 kubelet[4247]: E0108 21:45:38.762907    4247 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hs8c4" podUID="84ed3a25-aa09-43c0-b994-e6dec44965ba"
	Jan 08 21:45:52 no-preload-420119 kubelet[4247]: E0108 21:45:52.761843    4247 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hs8c4" podUID="84ed3a25-aa09-43c0-b994-e6dec44965ba"
	Jan 08 21:46:03 no-preload-420119 kubelet[4247]: E0108 21:46:03.762322    4247 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hs8c4" podUID="84ed3a25-aa09-43c0-b994-e6dec44965ba"
	Jan 08 21:46:14 no-preload-420119 kubelet[4247]: E0108 21:46:14.762387    4247 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hs8c4" podUID="84ed3a25-aa09-43c0-b994-e6dec44965ba"
	Jan 08 21:46:26 no-preload-420119 kubelet[4247]: E0108 21:46:26.763441    4247 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hs8c4" podUID="84ed3a25-aa09-43c0-b994-e6dec44965ba"
	Jan 08 21:46:33 no-preload-420119 kubelet[4247]: E0108 21:46:33.892474    4247 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:46:33 no-preload-420119 kubelet[4247]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:46:33 no-preload-420119 kubelet[4247]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:46:33 no-preload-420119 kubelet[4247]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:46:39 no-preload-420119 kubelet[4247]: E0108 21:46:39.763231    4247 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-hs8c4" podUID="84ed3a25-aa09-43c0-b994-e6dec44965ba"
	
	
	==> storage-provisioner [14e9230c0bc2ac68faaf41e5f6b743c7f7b8081d042211051fdad7855f135731] <==
	I0108 21:33:49.226881       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 21:33:49.240628       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 21:33:49.240778       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 21:33:49.255928       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 21:33:49.256263       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-420119_a19928f7-e595-427f-bfc2-4571c6fc64e8!
	I0108 21:33:49.262855       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"90970e51-ce4d-4879-8dea-aaf8217fbe72", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-420119_a19928f7-e595-427f-bfc2-4571c6fc64e8 became leader
	I0108 21:33:49.357305       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-420119_a19928f7-e595-427f-bfc2-4571c6fc64e8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-420119 -n no-preload-420119
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-420119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-hs8c4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-420119 describe pod metrics-server-57f55c9bc5-hs8c4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-420119 describe pod metrics-server-57f55c9bc5-hs8c4: exit status 1 (66.740511ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-hs8c4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-420119 describe pod metrics-server-57f55c9bc5-hs8c4: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (508.53s)
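Note on the post-mortem above: the kubelet log shows every pull of fake.domain/registry.k8s.io/echoserver:1.4 failing DNS resolution, so metrics-server stays in ImagePullBackOff, and by the time the post-mortem describe ran the pod had already been removed, hence the NotFound / exit status 1. A minimal Go sketch of that describe step, assuming kubectl is on PATH and reusing the context and pod name from the log; the NotFound-tolerant handling is illustrative only, not minikube's actual helper:

package main

// Re-run the post-mortem "describe pod" and treat a NotFound error
// (pod garbage-collected after listing) as informational rather than fatal.
import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-420119",
		"describe", "pod", "metrics-server-57f55c9bc5-hs8c4").CombinedOutput()
	if err != nil && strings.Contains(string(out), "NotFound") {
		fmt.Println("pod already gone; nothing to describe")
		return
	}
	if err != nil {
		fmt.Printf("describe failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}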

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 21:40:49.871282   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 21:41:04.516914   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-690577 -n default-k8s-diff-port-690577
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-08 21:49:43.05954277 +0000 UTC m=+6006.984434119
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
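For reference, the 9m0s wait that failed here polls the kubernetes-dashboard namespace for a Running pod carrying the k8s-app=kubernetes-dashboard label. A rough client-go sketch of that kind of wait, not the test's actual helper; the kubeconfig path and 5s poll interval are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the real test targets the profile's context instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 5s for up to 9 minutes, mirroring the test's time budget.
	err = wait.PollImmediate(5*time.Second, 9*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("dashboard pod never became Running:", err)
	}
}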
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-690577 -n default-k8s-diff-port-690577
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-690577 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-690577 logs -n 25: (1.291793857s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-420119                                   | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-001550                              | cert-expiration-001550       | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC | 08 Jan 24 21:22 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-420119                  | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-420119                                   | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:21 UTC | 08 Jan 24 21:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-001550                              | cert-expiration-001550       | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	| start   | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p pause-046839                                        | pause-046839                 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	| delete  | -p                                                     | disable-driver-mounts-216454 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	|         | disable-driver-mounts-216454                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:29 UTC |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-930023            | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-690577  | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC |                     |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-930023                 | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:30 UTC | 08 Jan 24 21:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-690577       | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC | 08 Jan 24 21:40 UTC |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-879273                              | old-k8s-version-879273       | jenkins | v1.32.0 | 08 Jan 24 21:41 UTC | 08 Jan 24 21:41 UTC |
	| start   | -p newest-cni-233407 --memory=2200 --alsologtostderr   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:41 UTC | 08 Jan 24 21:42 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-233407             | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:42 UTC | 08 Jan 24 21:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-233407                                   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-233407                  | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-233407 --memory=2200 --alsologtostderr   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:45 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-420119                                   | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:46 UTC | 08 Jan 24 21:46 UTC |
	| start   | -p kubernetes-upgrade-862639                           | kubernetes-upgrade-862639    | jenkins | v1.32.0 | 08 Jan 24 21:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:46:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:46:54.128129   56171 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:46:54.128420   56171 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:46:54.128430   56171 out.go:309] Setting ErrFile to fd 2...
	I0108 21:46:54.128434   56171 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:46:54.128621   56171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:46:54.129203   56171 out.go:303] Setting JSON to false
	I0108 21:46:54.130171   56171 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8938,"bootTime":1704741476,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:46:54.130230   56171 start.go:138] virtualization: kvm guest
	I0108 21:46:54.132720   56171 out.go:177] * [kubernetes-upgrade-862639] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:46:54.134371   56171 notify.go:220] Checking for updates...
	I0108 21:46:54.134382   56171 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:46:54.136001   56171 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:46:54.137642   56171 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:46:54.139511   56171 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:46:54.141231   56171 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:46:54.142658   56171 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:46:54.144661   56171 config.go:182] Loaded profile config "default-k8s-diff-port-690577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:46:54.144813   56171 config.go:182] Loaded profile config "embed-certs-930023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:46:54.144958   56171 config.go:182] Loaded profile config "newest-cni-233407": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 21:46:54.145056   56171 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:46:54.182339   56171 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 21:46:54.183934   56171 start.go:298] selected driver: kvm2
	I0108 21:46:54.183952   56171 start.go:902] validating driver "kvm2" against <nil>
	I0108 21:46:54.183967   56171 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:46:54.184676   56171 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:46:54.184780   56171 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:46:54.199679   56171 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:46:54.199728   56171 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 21:46:54.199993   56171 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 21:46:54.200051   56171 cni.go:84] Creating CNI manager for ""
	I0108 21:46:54.200064   56171 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:46:54.200075   56171 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 21:46:54.200081   56171 start_flags.go:323] config:
	{Name:kubernetes-upgrade-862639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-862639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:46:54.200247   56171 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:46:54.202409   56171 out.go:177] * Starting control plane node kubernetes-upgrade-862639 in cluster kubernetes-upgrade-862639
	I0108 21:46:52.192359   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:46:54.203904   56171 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 21:46:54.203953   56171 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0108 21:46:54.203963   56171 cache.go:56] Caching tarball of preloaded images
	I0108 21:46:54.204068   56171 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:46:54.204081   56171 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0108 21:46:54.204223   56171 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kubernetes-upgrade-862639/config.json ...
	I0108 21:46:54.204251   56171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kubernetes-upgrade-862639/config.json: {Name:mkc07c4a6c091856e1f12f0b94aadcb8cdfa66f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:46:54.204403   56171 start.go:365] acquiring machines lock for kubernetes-upgrade-862639: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:46:58.268330   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:01.340367   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:07.420365   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:10.492383   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:16.572331   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:19.644382   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:25.724468   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:28.796429   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:34.876360   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:37.948379   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:44.032412   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:47.100362   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:53.180334   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:56.252433   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:02.332391   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:05.404417   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:11.484381   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:14.556437   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:20.636345   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:23.708388   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:29.788350   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:32.860397   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:38.940390   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:42.012378   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:48.092365   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:51.164386   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:57.244313   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:00.316328   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:06.396368   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:09.468411   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:15.552364   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:18.620336   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:24.704310   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:27.772370   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:33.856394   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:36.924348   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 21:35:41 UTC, ends at Mon 2024-01-08 21:49:43 UTC. --
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.781273033Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750583781254550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fcbda743-e4a0-41be-a990-41264dd8da84 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.782306525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=628dcd6f-0c8b-489a-a933-4f3fafbebc82 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.782378734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=628dcd6f-0c8b-489a-a933-4f3fafbebc82 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.782552132Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749808462229589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a6bba2dab0d1f65e9624e65ff3ef214aa868c18c8d0712e83d9ebeb64ac9f,PodSandboxId:334b75c1a00d4d6d920842db9bfa3da8a0b38efaad2b6c7871d2adb33a453a5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749788693859250,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc38e19f-713f-4e81-b7e0-b806ad8f0f19,},Annotations:map[string]string{io.kubernetes.container.hash: 3fba03a8,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3,PodSandboxId:52e5447296e744deb69f7b651a7752a2bac43e52606770be924768efeffca3f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749784930380408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-92m44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048c7bfa-ea87-4f91-b002-c30fe11cac2a,},Annotations:map[string]string{io.kubernetes.container.hash: fd42b953,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749777774705984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f,PodSandboxId:4e4ad6f7d8f5543a88c821b53abd8f693e58ab7be107fbd9a05140e9ff88a1ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749777543523868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzxt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
9e4ed5e-f9af-4a21-b744-73f9a3c4deda,},Annotations:map[string]string{io.kubernetes.container.hash: fd01ac29,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a,PodSandboxId:5900c522809bd1557bbd65e0f07f7997c83dbc1c42b37dcce77dcf7f91a075fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749770746616447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2df432b1e578fc196f0bf6361862fb38,},An
notations:map[string]string{io.kubernetes.container.hash: a90bff5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6,PodSandboxId:119fb3452debd70dadff0b2505a4e428e780ec2289632c4278a0650e57c883ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749770689296112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1089ee33750e83e402e7b8e5b66c06e,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd,PodSandboxId:3cfd8f8af2bd6ff31eef083e1f653bf45d3e6e4d9e0c2ac734400b2559587673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749770523852037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32a428b4314fb1783d8979f840a7a9d,},An
notations:map[string]string{io.kubernetes.container.hash: 4edaf228,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2,PodSandboxId:d19e6048643cf4c95c0bd02b29baa8b3e83685bcb68190eed46c1ef5f83a58fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749770249460449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
a6104ed3f583bbf618bcc94d8f8b7b7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=628dcd6f-0c8b-489a-a933-4f3fafbebc82 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.835279264Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=03e614d9-f8d8-46f5-b34b-87818f3a3a66 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.835340301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=03e614d9-f8d8-46f5-b34b-87818f3a3a66 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.836909842Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0012c202-08ca-4675-a202-8c37e9df9847 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.837366959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750583837352599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0012c202-08ca-4675-a202-8c37e9df9847 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.838452352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1fe93a29-0311-46cb-b66e-3f9214066398 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.838504263Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1fe93a29-0311-46cb-b66e-3f9214066398 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.838711884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749808462229589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a6bba2dab0d1f65e9624e65ff3ef214aa868c18c8d0712e83d9ebeb64ac9f,PodSandboxId:334b75c1a00d4d6d920842db9bfa3da8a0b38efaad2b6c7871d2adb33a453a5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749788693859250,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc38e19f-713f-4e81-b7e0-b806ad8f0f19,},Annotations:map[string]string{io.kubernetes.container.hash: 3fba03a8,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3,PodSandboxId:52e5447296e744deb69f7b651a7752a2bac43e52606770be924768efeffca3f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749784930380408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-92m44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048c7bfa-ea87-4f91-b002-c30fe11cac2a,},Annotations:map[string]string{io.kubernetes.container.hash: fd42b953,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749777774705984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f,PodSandboxId:4e4ad6f7d8f5543a88c821b53abd8f693e58ab7be107fbd9a05140e9ff88a1ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749777543523868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzxt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
9e4ed5e-f9af-4a21-b744-73f9a3c4deda,},Annotations:map[string]string{io.kubernetes.container.hash: fd01ac29,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a,PodSandboxId:5900c522809bd1557bbd65e0f07f7997c83dbc1c42b37dcce77dcf7f91a075fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749770746616447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2df432b1e578fc196f0bf6361862fb38,},An
notations:map[string]string{io.kubernetes.container.hash: a90bff5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6,PodSandboxId:119fb3452debd70dadff0b2505a4e428e780ec2289632c4278a0650e57c883ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749770689296112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1089ee33750e83e402e7b8e5b66c06e,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd,PodSandboxId:3cfd8f8af2bd6ff31eef083e1f653bf45d3e6e4d9e0c2ac734400b2559587673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749770523852037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32a428b4314fb1783d8979f840a7a9d,},An
notations:map[string]string{io.kubernetes.container.hash: 4edaf228,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2,PodSandboxId:d19e6048643cf4c95c0bd02b29baa8b3e83685bcb68190eed46c1ef5f83a58fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749770249460449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
a6104ed3f583bbf618bcc94d8f8b7b7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1fe93a29-0311-46cb-b66e-3f9214066398 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.882087551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b366216d-2a62-488e-a6c3-96720e72dd38 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.882154788Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b366216d-2a62-488e-a6c3-96720e72dd38 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.883589167Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=47a5645f-67fc-4bde-a5bf-3e31ba9c183d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.884016126Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750583884001736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=47a5645f-67fc-4bde-a5bf-3e31ba9c183d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.884529310Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f7268189-8bed-477e-b4a0-dd270343bc36 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.884579501Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f7268189-8bed-477e-b4a0-dd270343bc36 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.884832972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749808462229589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a6bba2dab0d1f65e9624e65ff3ef214aa868c18c8d0712e83d9ebeb64ac9f,PodSandboxId:334b75c1a00d4d6d920842db9bfa3da8a0b38efaad2b6c7871d2adb33a453a5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749788693859250,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc38e19f-713f-4e81-b7e0-b806ad8f0f19,},Annotations:map[string]string{io.kubernetes.container.hash: 3fba03a8,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3,PodSandboxId:52e5447296e744deb69f7b651a7752a2bac43e52606770be924768efeffca3f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749784930380408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-92m44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048c7bfa-ea87-4f91-b002-c30fe11cac2a,},Annotations:map[string]string{io.kubernetes.container.hash: fd42b953,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749777774705984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f,PodSandboxId:4e4ad6f7d8f5543a88c821b53abd8f693e58ab7be107fbd9a05140e9ff88a1ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749777543523868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzxt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
9e4ed5e-f9af-4a21-b744-73f9a3c4deda,},Annotations:map[string]string{io.kubernetes.container.hash: fd01ac29,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a,PodSandboxId:5900c522809bd1557bbd65e0f07f7997c83dbc1c42b37dcce77dcf7f91a075fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749770746616447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2df432b1e578fc196f0bf6361862fb38,},An
notations:map[string]string{io.kubernetes.container.hash: a90bff5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6,PodSandboxId:119fb3452debd70dadff0b2505a4e428e780ec2289632c4278a0650e57c883ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749770689296112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1089ee33750e83e402e7b8e5b66c06e,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd,PodSandboxId:3cfd8f8af2bd6ff31eef083e1f653bf45d3e6e4d9e0c2ac734400b2559587673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749770523852037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32a428b4314fb1783d8979f840a7a9d,},An
notations:map[string]string{io.kubernetes.container.hash: 4edaf228,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2,PodSandboxId:d19e6048643cf4c95c0bd02b29baa8b3e83685bcb68190eed46c1ef5f83a58fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749770249460449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
a6104ed3f583bbf618bcc94d8f8b7b7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f7268189-8bed-477e-b4a0-dd270343bc36 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.921409145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=454d169b-ebfe-4384-87f7-1a65555df37c name=/runtime.v1.RuntimeService/Version
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.921470980Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=454d169b-ebfe-4384-87f7-1a65555df37c name=/runtime.v1.RuntimeService/Version
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.923033529Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e07e1fec-e021-492b-ab5e-d5b4354012c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.923696672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750583923674196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e07e1fec-e021-492b-ab5e-d5b4354012c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.924466021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4a5d5c30-73fc-45d6-a10f-84cea2904b00 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.924515175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4a5d5c30-73fc-45d6-a10f-84cea2904b00 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:49:43 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:49:43.924695598Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749808462229589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a6bba2dab0d1f65e9624e65ff3ef214aa868c18c8d0712e83d9ebeb64ac9f,PodSandboxId:334b75c1a00d4d6d920842db9bfa3da8a0b38efaad2b6c7871d2adb33a453a5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749788693859250,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc38e19f-713f-4e81-b7e0-b806ad8f0f19,},Annotations:map[string]string{io.kubernetes.container.hash: 3fba03a8,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3,PodSandboxId:52e5447296e744deb69f7b651a7752a2bac43e52606770be924768efeffca3f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749784930380408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-92m44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048c7bfa-ea87-4f91-b002-c30fe11cac2a,},Annotations:map[string]string{io.kubernetes.container.hash: fd42b953,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749777774705984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f,PodSandboxId:4e4ad6f7d8f5543a88c821b53abd8f693e58ab7be107fbd9a05140e9ff88a1ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749777543523868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzxt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
9e4ed5e-f9af-4a21-b744-73f9a3c4deda,},Annotations:map[string]string{io.kubernetes.container.hash: fd01ac29,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a,PodSandboxId:5900c522809bd1557bbd65e0f07f7997c83dbc1c42b37dcce77dcf7f91a075fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749770746616447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2df432b1e578fc196f0bf6361862fb38,},An
notations:map[string]string{io.kubernetes.container.hash: a90bff5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6,PodSandboxId:119fb3452debd70dadff0b2505a4e428e780ec2289632c4278a0650e57c883ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749770689296112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1089ee33750e83e402e7b8e5b66c06e,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd,PodSandboxId:3cfd8f8af2bd6ff31eef083e1f653bf45d3e6e4d9e0c2ac734400b2559587673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749770523852037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32a428b4314fb1783d8979f840a7a9d,},An
notations:map[string]string{io.kubernetes.container.hash: 4edaf228,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2,PodSandboxId:d19e6048643cf4c95c0bd02b29baa8b3e83685bcb68190eed46c1ef5f83a58fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749770249460449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
a6104ed3f583bbf618bcc94d8f8b7b7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4a5d5c30-73fc-45d6-a10f-84cea2904b00 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5de4d77203b91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   322cee6dffc36       storage-provisioner
	868a6bba2dab0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   334b75c1a00d4       busybox
	d5beab6237d24       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   52e5447296e74       coredns-5dd5756b68-92m44
	a830809c460f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   322cee6dffc36       storage-provisioner
	6818cfdc588e8       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   4e4ad6f7d8f55       kube-proxy-qzxt5
	079c7966c6797       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   5900c522809bd       etcd-default-k8s-diff-port-690577
	419453feb7e07       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   119fb3452debd       kube-scheduler-default-k8s-diff-port-690577
	c112d2a3f8984       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   3cfd8f8af2bd6       kube-apiserver-default-k8s-diff-port-690577
	14f88651cc075       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   d19e6048643cf       kube-controller-manager-default-k8s-diff-port-690577
	
	
	==> coredns [d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55859 - 8987 "HINFO IN 485812101045147905.9036944099526942375. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014059557s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-690577
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-690577
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=default-k8s-diff-port-690577
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_28_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:28:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-690577
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:49:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:46:57 +0000   Mon, 08 Jan 2024 21:28:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:46:57 +0000   Mon, 08 Jan 2024 21:28:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:46:57 +0000   Mon, 08 Jan 2024 21:28:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:46:57 +0000   Mon, 08 Jan 2024 21:36:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.165
	  Hostname:    default-k8s-diff-port-690577
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ba9bd2360df43d8a78dec72642dfc6f
	  System UUID:                8ba9bd23-60df-43d8-a78d-ec72642dfc6f
	  Boot ID:                    c71f28c0-c58a-4372-b1c1-6bf723d33afd
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-92m44                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-690577                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-690577             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-690577    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-qzxt5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-690577             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-46dvw                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-690577 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-690577 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-690577 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                21m (x2 over 21m)  kubelet          Node default-k8s-diff-port-690577 status is now: NodeReady
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-690577 event: Registered Node default-k8s-diff-port-690577 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-690577 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-690577 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-690577 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-690577 event: Registered Node default-k8s-diff-port-690577 in Controller
	
	
	==> dmesg <==
	[Jan 8 21:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068063] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.463971] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.558644] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.157257] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.606503] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.283716] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.124837] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.150730] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.116626] systemd-fstab-generator[686]: Ignoring "noauto" for root device
	[  +0.239864] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[Jan 8 21:36] systemd-fstab-generator[929]: Ignoring "noauto" for root device
	[ +15.429229] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a] <==
	{"level":"info","ts":"2024-01-08T21:36:13.845943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9a51797c3140749b elected leader 9a51797c3140749b at term 3"}
	{"level":"info","ts":"2024-01-08T21:36:13.848531Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9a51797c3140749b","local-member-attributes":"{Name:default-k8s-diff-port-690577 ClientURLs:[https://192.168.50.165:2379]}","request-path":"/0/members/9a51797c3140749b/attributes","cluster-id":"4efceb46dfe38217","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:36:13.8486Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:36:13.84986Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.165:2379"}
	{"level":"info","ts":"2024-01-08T21:36:13.848612Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:36:13.85064Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T21:36:13.850728Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:36:13.856299Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-01-08T21:36:21.980574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.012606ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8402464470754930398 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-46dvw.17a87ce4f46105a9\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-46dvw.17a87ce4f46105a9\" value_size:852 lease:8402464470754930282 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-01-08T21:36:21.980905Z","caller":"traceutil/trace.go:171","msg":"trace[1035437584] transaction","detail":"{read_only:false; response_revision:547; number_of_response:1; }","duration":"188.367437ms","start":"2024-01-08T21:36:21.792519Z","end":"2024-01-08T21:36:21.980886Z","steps":["trace[1035437584] 'process raft request'  (duration: 43.472162ms)","trace[1035437584] 'compare'  (duration: 143.732244ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T21:36:22.237989Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.206451ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8402464470754930399 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-46dvw.17a87ce4f4616fb2\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-46dvw.17a87ce4f4616fb2\" value_size:690 lease:8402464470754930282 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-01-08T21:36:22.238349Z","caller":"traceutil/trace.go:171","msg":"trace[1576829541] transaction","detail":"{read_only:false; response_revision:548; number_of_response:1; }","duration":"248.485318ms","start":"2024-01-08T21:36:21.989822Z","end":"2024-01-08T21:36:22.238307Z","steps":["trace[1576829541] 'process raft request'  (duration: 97.904868ms)","trace[1576829541] 'compare'  (duration: 149.958136ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T21:36:22.73236Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.149539ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8402464470754930402 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-46dvw.17a87ce5165c4958\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-46dvw.17a87ce5165c4958\" value_size:738 lease:8402464470754930282 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-01-08T21:36:22.732575Z","caller":"traceutil/trace.go:171","msg":"trace[2048237520] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"399.838297ms","start":"2024-01-08T21:36:22.332725Z","end":"2024-01-08T21:36:22.732563Z","steps":["trace[2048237520] 'process raft request'  (duration: 140.437684ms)","trace[2048237520] 'compare'  (duration: 258.982868ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T21:36:22.732653Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T21:36:22.332709Z","time spent":"399.921359ms","remote":"127.0.0.1:57922","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":833,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-46dvw.17a87ce5165c4958\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-57f55c9bc5-46dvw.17a87ce5165c4958\" value_size:738 lease:8402464470754930282 >> failure:<>"}
	{"level":"warn","ts":"2024-01-08T21:36:22.732831Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.17654ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:36:22.7329Z","caller":"traceutil/trace.go:171","msg":"trace[1468010212] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:550; }","duration":"100.256846ms","start":"2024-01-08T21:36:22.632633Z","end":"2024-01-08T21:36:22.732889Z","steps":["trace[1468010212] 'agreement among raft nodes before linearized reading'  (duration: 99.969991ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:36:22.734032Z","caller":"traceutil/trace.go:171","msg":"trace[383999209] transaction","detail":"{read_only:false; response_revision:550; number_of_response:1; }","duration":"400.906921ms","start":"2024-01-08T21:36:22.333116Z","end":"2024-01-08T21:36:22.734023Z","steps":["trace[383999209] 'process raft request'  (duration: 399.371424ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:36:22.734278Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T21:36:22.333105Z","time spent":"401.02351ms","remote":"127.0.0.1:57946","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4066,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-46dvw\" mod_revision:459 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-46dvw\" value_size:4000 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-46dvw\" > >"}
	{"level":"info","ts":"2024-01-08T21:36:22.936317Z","caller":"traceutil/trace.go:171","msg":"trace[772069932] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"189.95676ms","start":"2024-01-08T21:36:22.746342Z","end":"2024-01-08T21:36:22.936299Z","steps":["trace[772069932] 'process raft request'  (duration: 186.642233ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:42:27.29058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.102399ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8402464470754933043 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.165\" mod_revision:876 > success:<request_put:<key:\"/registry/masterleases/192.168.50.165\" value_size:67 lease:8402464470754933041 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.165\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-08T21:42:27.291107Z","caller":"traceutil/trace.go:171","msg":"trace[570870853] transaction","detail":"{read_only:false; response_revision:885; number_of_response:1; }","duration":"200.389987ms","start":"2024-01-08T21:42:27.090683Z","end":"2024-01-08T21:42:27.291073Z","steps":["trace[570870853] 'process raft request'  (duration: 58.541247ms)","trace[570870853] 'compare'  (duration: 140.988523ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T21:46:13.888429Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":824}
	{"level":"info","ts":"2024-01-08T21:46:13.893472Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":824,"took":"4.472896ms","hash":2214369215}
	{"level":"info","ts":"2024-01-08T21:46:13.900002Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2214369215,"revision":824,"compact-revision":-1}
	
	
	==> kernel <==
	 21:49:44 up 14 min,  0 users,  load average: 0.09, 0.13, 0.11
	Linux default-k8s-diff-port-690577 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd] <==
	I0108 21:46:15.582350       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 21:46:16.582712       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:46:16.582821       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:46:16.582838       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:46:16.582731       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:46:16.583080       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:46:16.584451       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:47:15.444458       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 21:47:16.583437       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:47:16.583651       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:47:16.583686       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:47:16.584687       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:47:16.584827       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:47:16.584839       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:48:15.445275       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 21:49:15.445501       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 21:49:16.583894       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:49:16.584015       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:49:16.584061       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:49:16.585192       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:49:16.585301       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:49:16.585337       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
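The recurring 503s for v1beta1.metrics.k8s.io are the aggregated metrics API failing its availability check: the APIService is registered, but the metrics-server pod backing it never becomes ready (the kubelet log below shows why). As an illustrative check outside the harness, assuming the addon's usual k8s-app=metrics-server label, the aggregated API and its backend could be inspected with:

  kubectl --context default-k8s-diff-port-690577 get apiservice v1beta1.metrics.k8s.io
  kubectl --context default-k8s-diff-port-690577 -n kube-system get pods -l k8s-app=metrics-server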
	
	
	==> kube-controller-manager [14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2] <==
	I0108 21:43:59.081299       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:44:28.697675       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:44:29.098690       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:44:58.703542       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:44:59.109714       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:45:28.710115       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:45:29.118274       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:45:58.716597       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:45:59.128610       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:46:28.725209       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:46:29.138834       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:46:58.731879       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:46:59.148280       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 21:47:23.249574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="342.466µs"
	E0108 21:47:28.737571       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:47:29.159001       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 21:47:35.248054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="194.345µs"
	E0108 21:47:58.743122       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:47:59.168106       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:48:28.749239       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:48:29.176720       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:48:58.754601       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:48:59.186219       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:49:28.760699       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:49:29.195833       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f] <==
	I0108 21:36:17.958242       1 server_others.go:69] "Using iptables proxy"
	I0108 21:36:17.974335       1 node.go:141] Successfully retrieved node IP: 192.168.50.165
	I0108 21:36:18.025055       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:36:18.025137       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:36:18.032463       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:36:18.032562       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:36:18.032929       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:36:18.033006       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:36:18.034054       1 config.go:188] "Starting service config controller"
	I0108 21:36:18.034105       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:36:18.034139       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:36:18.034154       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:36:18.034667       1 config.go:315] "Starting node config controller"
	I0108 21:36:18.034824       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:36:18.134608       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 21:36:18.134935       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:36:18.135119       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6] <==
	I0108 21:36:13.622221       1 serving.go:348] Generated self-signed cert in-memory
	I0108 21:36:15.661114       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0108 21:36:15.661218       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:36:15.683108       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0108 21:36:15.683641       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0108 21:36:15.683706       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0108 21:36:15.683889       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 21:36:15.684567       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 21:36:15.684612       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:36:15.684647       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0108 21:36:15.684671       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0108 21:36:15.784368       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0108 21:36:15.784712       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0108 21:36:15.784857       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:35:41 UTC, ends at Mon 2024-01-08 21:49:44 UTC. --
	Jan 08 21:47:09 default-k8s-diff-port-690577 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:47:09 default-k8s-diff-port-690577 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:47:10 default-k8s-diff-port-690577 kubelet[935]: E0108 21:47:10.242582     935 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 08 21:47:10 default-k8s-diff-port-690577 kubelet[935]: E0108 21:47:10.242658     935 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 08 21:47:10 default-k8s-diff-port-690577 kubelet[935]: E0108 21:47:10.242966     935 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-kqmk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-46dvw_kube-system(6c095070-fdfd-4d65-b0b4-b4c234fad85d): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 21:47:10 default-k8s-diff-port-690577 kubelet[935]: E0108 21:47:10.243077     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:47:23 default-k8s-diff-port-690577 kubelet[935]: E0108 21:47:23.229963     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:47:35 default-k8s-diff-port-690577 kubelet[935]: E0108 21:47:35.230866     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:47:48 default-k8s-diff-port-690577 kubelet[935]: E0108 21:47:48.229651     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:48:02 default-k8s-diff-port-690577 kubelet[935]: E0108 21:48:02.229027     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:48:09 default-k8s-diff-port-690577 kubelet[935]: E0108 21:48:09.359355     935 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:48:09 default-k8s-diff-port-690577 kubelet[935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:48:09 default-k8s-diff-port-690577 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:48:09 default-k8s-diff-port-690577 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:48:16 default-k8s-diff-port-690577 kubelet[935]: E0108 21:48:16.229461     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:48:31 default-k8s-diff-port-690577 kubelet[935]: E0108 21:48:31.234830     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:48:45 default-k8s-diff-port-690577 kubelet[935]: E0108 21:48:45.229993     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:49:00 default-k8s-diff-port-690577 kubelet[935]: E0108 21:49:00.229554     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:49:09 default-k8s-diff-port-690577 kubelet[935]: E0108 21:49:09.357163     935 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:49:09 default-k8s-diff-port-690577 kubelet[935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:49:09 default-k8s-diff-port-690577 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:49:09 default-k8s-diff-port-690577 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:49:11 default-k8s-diff-port-690577 kubelet[935]: E0108 21:49:11.230822     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:49:25 default-k8s-diff-port-690577 kubelet[935]: E0108 21:49:25.229819     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:49:40 default-k8s-diff-port-690577 kubelet[935]: E0108 21:49:40.229301     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
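The ErrImagePull / ImagePullBackOff loop above is expected for this profile: the metrics-server addon was enabled with --registries=MetricsServer=fake.domain (see the corresponding `addons enable metrics-server` entries in the Audit table later in this report), so the kubelet keeps trying to pull fake.domain/registry.k8s.io/echoserver:1.4 from a registry that does not resolve. A quick way to confirm which image the deployment is actually configured to pull, assuming the addon's usual kube-system/metrics-server deployment name (illustrative, not part of the harness):

  kubectl --context default-k8s-diff-port-690577 -n kube-system get deployment metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'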
	
	
	==> storage-provisioner [5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348] <==
	I0108 21:36:48.608985       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 21:36:48.624087       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 21:36:48.624243       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 21:37:06.027608       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 21:37:06.028064       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-690577_df43ce7a-bee6-4dd1-bdde-80a7cb13df6d!
	I0108 21:37:06.030093       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"221648e3-88aa-4645-a609-fbdc8360324e", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-690577_df43ce7a-bee6-4dd1-bdde-80a7cb13df6d became leader
	I0108 21:37:06.128674       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-690577_df43ce7a-bee6-4dd1-bdde-80a7cb13df6d!
	
	
	==> storage-provisioner [a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4] <==
	I0108 21:36:17.957155       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0108 21:36:47.960972       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
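This exited instance corresponds to attempt 1 in the container status table above: it came up at 21:36:17 but could not reach the in-cluster apiserver address 10.96.0.1:443 and exited, after which attempt 2 (5de4d77203b91) took over and acquired the leader lease at 21:37:06. To pull the previous attempt's log interactively rather than from minikube logs (illustrative only):

  kubectl --context default-k8s-diff-port-690577 -n kube-system logs storage-provisioner --previous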
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-690577 -n default-k8s-diff-port-690577
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-690577 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-46dvw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-690577 describe pod metrics-server-57f55c9bc5-46dvw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-690577 describe pod metrics-server-57f55c9bc5-46dvw: exit status 1 (64.280133ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-46dvw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-690577 describe pod metrics-server-57f55c9bc5-46dvw: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.30s)
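One caveat when reading the post-mortem above: the NotFound from `kubectl describe pod metrics-server-57f55c9bc5-46dvw` is almost certainly a namespace artifact rather than proof the pod disappeared. The field-selector query a moment earlier still listed the pod as non-running, but the describe call omits `-n kube-system`, so it looks in the default namespace. A namespaced re-run of the same two checks would look like this (illustrative, not part of helpers_test.go):

  kubectl --context default-k8s-diff-port-690577 get po -A --field-selector=status.phase!=Running \
    -o=jsonpath='{.items[*].metadata.name}'
  kubectl --context default-k8s-diff-port-690577 -n kube-system describe pod metrics-server-57f55c9bc5-46dvw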

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-930023 -n embed-certs-930023
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-08 21:50:10.127143803 +0000 UTC m=+6034.052035172
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-930023 -n embed-certs-930023
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-930023 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-930023 logs -n 25: (1.35529522s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-420119                                   | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-001550                              | cert-expiration-001550       | jenkins | v1.32.0 | 08 Jan 24 21:19 UTC | 08 Jan 24 21:22 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-420119                  | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-420119                                   | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:21 UTC | 08 Jan 24 21:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-001550                              | cert-expiration-001550       | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	| start   | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p pause-046839                                        | pause-046839                 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	| delete  | -p                                                     | disable-driver-mounts-216454 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:22 UTC |
	|         | disable-driver-mounts-216454                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:22 UTC | 08 Jan 24 21:29 UTC |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-930023            | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-690577  | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC |                     |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-930023                 | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:30 UTC | 08 Jan 24 21:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-690577       | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC | 08 Jan 24 21:40 UTC |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-879273                              | old-k8s-version-879273       | jenkins | v1.32.0 | 08 Jan 24 21:41 UTC | 08 Jan 24 21:41 UTC |
	| start   | -p newest-cni-233407 --memory=2200 --alsologtostderr   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:41 UTC | 08 Jan 24 21:42 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-233407             | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:42 UTC | 08 Jan 24 21:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-233407                                   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-233407                  | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-233407 --memory=2200 --alsologtostderr   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:45 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-420119                                   | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:46 UTC | 08 Jan 24 21:46 UTC |
	| start   | -p kubernetes-upgrade-862639                           | kubernetes-upgrade-862639    | jenkins | v1.32.0 | 08 Jan 24 21:46 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
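	For reference, the last start command in the table above is split across several rows; reassembled onto a single line (flags exactly as logged, binary path taken from the MINIKUBE_BIN value shown in the log below) it would read roughly:

	    out/minikube-linux-amd64 start -p kubernetes-upgrade-862639 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio

	This is the run whose "Last Start" log follows.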
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:46:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:46:54.128129   56171 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:46:54.128420   56171 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:46:54.128430   56171 out.go:309] Setting ErrFile to fd 2...
	I0108 21:46:54.128434   56171 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:46:54.128621   56171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:46:54.129203   56171 out.go:303] Setting JSON to false
	I0108 21:46:54.130171   56171 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8938,"bootTime":1704741476,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:46:54.130230   56171 start.go:138] virtualization: kvm guest
	I0108 21:46:54.132720   56171 out.go:177] * [kubernetes-upgrade-862639] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:46:54.134371   56171 notify.go:220] Checking for updates...
	I0108 21:46:54.134382   56171 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:46:54.136001   56171 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:46:54.137642   56171 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:46:54.139511   56171 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:46:54.141231   56171 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:46:54.142658   56171 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:46:54.144661   56171 config.go:182] Loaded profile config "default-k8s-diff-port-690577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:46:54.144813   56171 config.go:182] Loaded profile config "embed-certs-930023": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:46:54.144958   56171 config.go:182] Loaded profile config "newest-cni-233407": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 21:46:54.145056   56171 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:46:54.182339   56171 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 21:46:54.183934   56171 start.go:298] selected driver: kvm2
	I0108 21:46:54.183952   56171 start.go:902] validating driver "kvm2" against <nil>
	I0108 21:46:54.183967   56171 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:46:54.184676   56171 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:46:54.184780   56171 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:46:54.199679   56171 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:46:54.199728   56171 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 21:46:54.199993   56171 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 21:46:54.200051   56171 cni.go:84] Creating CNI manager for ""
	I0108 21:46:54.200064   56171 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:46:54.200075   56171 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 21:46:54.200081   56171 start_flags.go:323] config:
	{Name:kubernetes-upgrade-862639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-862639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:46:54.200247   56171 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:46:54.202409   56171 out.go:177] * Starting control plane node kubernetes-upgrade-862639 in cluster kubernetes-upgrade-862639
	I0108 21:46:52.192359   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:46:54.203904   56171 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 21:46:54.203953   56171 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0108 21:46:54.203963   56171 cache.go:56] Caching tarball of preloaded images
	I0108 21:46:54.204068   56171 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:46:54.204081   56171 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0108 21:46:54.204223   56171 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kubernetes-upgrade-862639/config.json ...
	I0108 21:46:54.204251   56171 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kubernetes-upgrade-862639/config.json: {Name:mkc07c4a6c091856e1f12f0b94aadcb8cdfa66f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:46:54.204403   56171 start.go:365] acquiring machines lock for kubernetes-upgrade-862639: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:46:58.268330   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:01.340367   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:07.420365   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:10.492383   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:16.572331   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:19.644382   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:25.724468   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:28.796429   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:34.876360   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:37.948379   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:44.032412   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:47.100362   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:53.180334   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:47:56.252433   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:02.332391   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:05.404417   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:11.484381   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:14.556437   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:20.636345   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:23.708388   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:29.788350   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:32.860397   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:38.940390   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:42.012378   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:48.092365   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:51.164386   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:48:57.244313   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:00.316328   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:06.396368   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:09.468411   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:15.552364   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:18.620336   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:24.704310   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:27.772370   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:33.856394   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:36.924348   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:43.004400   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:46.076407   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:52.156353   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:49:55.228449   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:50:01.308381   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:50:04.380325   55729 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.145:22: connect: no route to host
	I0108 21:50:07.382997   56171 start.go:369] acquired machines lock for "kubernetes-upgrade-862639" in 3m13.178555996s
	I0108 21:50:07.383091   56171 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-862639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-862639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0108 21:50:07.383211   56171 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 21:50:07.385188   56171 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0108 21:50:07.385373   56171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:50:07.385408   56171 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:50:07.399634   56171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33163
	I0108 21:50:07.400164   56171 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:50:07.400695   56171 main.go:141] libmachine: Using API Version  1
	I0108 21:50:07.400720   56171 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:50:07.401154   56171 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:50:07.401357   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetMachineName
	I0108 21:50:07.401548   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .DriverName
	I0108 21:50:07.401702   56171 start.go:159] libmachine.API.Create for "kubernetes-upgrade-862639" (driver="kvm2")
	I0108 21:50:07.401730   56171 client.go:168] LocalClient.Create starting
	I0108 21:50:07.401769   56171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem
	I0108 21:50:07.401811   56171 main.go:141] libmachine: Decoding PEM data...
	I0108 21:50:07.401829   56171 main.go:141] libmachine: Parsing certificate...
	I0108 21:50:07.401880   56171 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem
	I0108 21:50:07.401910   56171 main.go:141] libmachine: Decoding PEM data...
	I0108 21:50:07.401923   56171 main.go:141] libmachine: Parsing certificate...
	I0108 21:50:07.401940   56171 main.go:141] libmachine: Running pre-create checks...
	I0108 21:50:07.401952   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .PreCreateCheck
	I0108 21:50:07.402331   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetConfigRaw
	I0108 21:50:07.402755   56171 main.go:141] libmachine: Creating machine...
	I0108 21:50:07.402770   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .Create
	I0108 21:50:07.402911   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Creating KVM machine...
	I0108 21:50:07.404222   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | found existing default KVM network
	I0108 21:50:07.405419   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | I0108 21:50:07.405218   56766 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:89:5c:4f} reservation:<nil>}
	I0108 21:50:07.406154   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | I0108 21:50:07.406080   56766 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c3:e7:6a} reservation:<nil>}
	I0108 21:50:07.407097   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | I0108 21:50:07.406982   56766 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:b3:a0:9c} reservation:<nil>}
	I0108 21:50:07.408043   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | I0108 21:50:07.407948   56766 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002916a0}
	I0108 21:50:07.414083   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | trying to create private KVM network mk-kubernetes-upgrade-862639 192.168.72.0/24...
	I0108 21:50:07.495676   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | private KVM network mk-kubernetes-upgrade-862639 192.168.72.0/24 created
	I0108 21:50:07.495711   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Setting up store path in /home/jenkins/minikube-integration/17907-10702/.minikube/machines/kubernetes-upgrade-862639 ...
	I0108 21:50:07.495739   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Building disk image from file:///home/jenkins/minikube-integration/17907-10702/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 21:50:07.495792   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | I0108 21:50:07.495705   56766 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:50:07.495972   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Downloading /home/jenkins/minikube-integration/17907-10702/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17907-10702/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 21:50:07.705007   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | I0108 21:50:07.704836   56766 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/kubernetes-upgrade-862639/id_rsa...
	I0108 21:50:07.777820   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | I0108 21:50:07.777675   56766 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/kubernetes-upgrade-862639/kubernetes-upgrade-862639.rawdisk...
	I0108 21:50:07.777859   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | Writing magic tar header
	I0108 21:50:07.777881   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | Writing SSH key tar header
	I0108 21:50:07.777907   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | I0108 21:50:07.777819   56766 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17907-10702/.minikube/machines/kubernetes-upgrade-862639 ...
	I0108 21:50:07.777990   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/kubernetes-upgrade-862639
	I0108 21:50:07.778027   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube/machines/kubernetes-upgrade-862639 (perms=drwx------)
	I0108 21:50:07.778043   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube/machines
	I0108 21:50:07.778061   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:50:07.778075   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17907-10702
	I0108 21:50:07.778092   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 21:50:07.778124   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube/machines (perms=drwxr-xr-x)
	I0108 21:50:07.778138   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | Checking permissions on dir: /home/jenkins
	I0108 21:50:07.778153   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | Checking permissions on dir: /home
	I0108 21:50:07.778168   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | Skipping /home - not owner
	I0108 21:50:07.778185   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702/.minikube (perms=drwxr-xr-x)
	I0108 21:50:07.778198   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Setting executable bit set on /home/jenkins/minikube-integration/17907-10702 (perms=drwxrwxr-x)
	I0108 21:50:07.778214   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 21:50:07.778221   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 21:50:07.778232   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Creating domain...
	I0108 21:50:07.780589   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) define libvirt domain using xml: 
	I0108 21:50:07.780626   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) <domain type='kvm'>
	I0108 21:50:07.780640   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)   <name>kubernetes-upgrade-862639</name>
	I0108 21:50:07.780657   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)   <memory unit='MiB'>2200</memory>
	I0108 21:50:07.780673   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)   <vcpu>2</vcpu>
	I0108 21:50:07.780682   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)   <features>
	I0108 21:50:07.780691   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <acpi/>
	I0108 21:50:07.780697   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <apic/>
	I0108 21:50:07.780706   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <pae/>
	I0108 21:50:07.780718   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     
	I0108 21:50:07.780733   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)   </features>
	I0108 21:50:07.780749   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)   <cpu mode='host-passthrough'>
	I0108 21:50:07.780777   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)   
	I0108 21:50:07.780798   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)   </cpu>
	I0108 21:50:07.780838   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)   <os>
	I0108 21:50:07.780863   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <type>hvm</type>
	I0108 21:50:07.780880   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <boot dev='cdrom'/>
	I0108 21:50:07.780893   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <boot dev='hd'/>
	I0108 21:50:07.780910   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <bootmenu enable='no'/>
	I0108 21:50:07.780919   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)   </os>
	I0108 21:50:07.780929   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)   <devices>
	I0108 21:50:07.780944   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <disk type='file' device='cdrom'>
	I0108 21:50:07.780967   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)       <source file='/home/jenkins/minikube-integration/17907-10702/.minikube/machines/kubernetes-upgrade-862639/boot2docker.iso'/>
	I0108 21:50:07.780978   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)       <target dev='hdc' bus='scsi'/>
	I0108 21:50:07.780990   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)       <readonly/>
	I0108 21:50:07.781004   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     </disk>
	I0108 21:50:07.781032   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <disk type='file' device='disk'>
	I0108 21:50:07.781060   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 21:50:07.781095   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)       <source file='/home/jenkins/minikube-integration/17907-10702/.minikube/machines/kubernetes-upgrade-862639/kubernetes-upgrade-862639.rawdisk'/>
	I0108 21:50:07.781109   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)       <target dev='hda' bus='virtio'/>
	I0108 21:50:07.781120   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     </disk>
	I0108 21:50:07.781133   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <interface type='network'>
	I0108 21:50:07.781156   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)       <source network='mk-kubernetes-upgrade-862639'/>
	I0108 21:50:07.781178   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)       <model type='virtio'/>
	I0108 21:50:07.781191   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     </interface>
	I0108 21:50:07.781203   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <interface type='network'>
	I0108 21:50:07.781237   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)       <source network='default'/>
	I0108 21:50:07.781254   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)       <model type='virtio'/>
	I0108 21:50:07.781264   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     </interface>
	I0108 21:50:07.781275   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <serial type='pty'>
	I0108 21:50:07.781284   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)       <target port='0'/>
	I0108 21:50:07.781289   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     </serial>
	I0108 21:50:07.781300   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <console type='pty'>
	I0108 21:50:07.781319   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)       <target type='serial' port='0'/>
	I0108 21:50:07.781344   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     </console>
	I0108 21:50:07.781363   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     <rng model='virtio'>
	I0108 21:50:07.781378   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)       <backend model='random'>/dev/random</backend>
	I0108 21:50:07.781389   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     </rng>
	I0108 21:50:07.781406   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     
	I0108 21:50:07.781419   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)     
	I0108 21:50:07.781432   56171 main.go:141] libmachine: (kubernetes-upgrade-862639)   </devices>
	I0108 21:50:07.781444   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) </domain>
	I0108 21:50:07.781460   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) 
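	
	The XML above is the libvirt domain definition minikube generates for the new VM, attached to the private network created a few lines earlier. A minimal sketch of how one might inspect what libvirt actually ended up with on the same host (assuming virsh access to the qemu:///system URI from the profile config) could be:

	    virsh --connect qemu:///system net-list --all
	    virsh --connect qemu:///system net-dumpxml mk-kubernetes-upgrade-862639
	    virsh --connect qemu:///system dumpxml kubernetes-upgrade-862639

	These are standard virsh subcommands; the domain and network names are taken from the log above.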
	I0108 21:50:07.786441   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:fe:32:8f in network default
	I0108 21:50:07.787081   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Ensuring networks are active...
	I0108 21:50:07.787100   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:50:07.787903   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Ensuring network default is active
	I0108 21:50:07.788289   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Ensuring network mk-kubernetes-upgrade-862639 is active
	I0108 21:50:07.788764   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Getting domain xml...
	I0108 21:50:07.789490   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Creating domain...
	I0108 21:50:09.122450   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) Waiting to get IP...
	I0108 21:50:09.123226   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:50:09.123724   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | unable to find current IP address of domain kubernetes-upgrade-862639 in network mk-kubernetes-upgrade-862639
	I0108 21:50:09.123752   56171 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | I0108 21:50:09.123662   56766 retry.go:31] will retry after 296.809449ms: waiting for machine to come up
	I0108 21:50:07.381193   55729 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:50:07.381230   55729 main.go:141] libmachine: (newest-cni-233407) Calling .GetSSHHostname
	I0108 21:50:07.382846   55729 machine.go:91] provisioned docker machine in 4m37.395198946s
	I0108 21:50:07.382888   55729 fix.go:56] fixHost completed within 4m37.418682551s
	I0108 21:50:07.382898   55729 start.go:83] releasing machines lock for "newest-cni-233407", held for 4m37.418708817s
	W0108 21:50:07.382923   55729 start.go:694] error starting host: provision: host is not running
	W0108 21:50:07.383022   55729 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0108 21:50:07.383033   55729 start.go:709] Will try again in 5 seconds ...
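	
	The long run of "no route to host" lines above is the provisioner (PID 55729) repeatedly failing to reach the newest-cni-233407 guest on 192.168.61.145:22 until it gives up with "provision: host is not running". A rough way to reproduce that reachability check by hand (a sketch, assuming shell access to the CI host and that nc is available) is:

	    # probe the guest's SSH port every few seconds, much like the dial loop in the log
	    for i in $(seq 1 20); do
	      nc -z -w 3 192.168.61.145 22 && { echo reachable; break; }
	      sleep 6
	    done

	If the port never answers, the libvirt network and the domain's state are the more likely culprits than further retries.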
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 21:36:02 UTC, ends at Mon 2024-01-08 21:50:11 UTC. --
	Jan 08 21:50:10 embed-certs-930023 crio[727]: time="2024-01-08 21:50:10.939482915Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6565297492e79f3df1c9a4130be7c007460cc922548e6b1a925e21959516e31d,Metadata:&PodSandboxMetadata{Name:busybox,Uid:3ccaabb4-5810-420a-af04-4ea75d328791,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704749807587210212,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ccaabb4-5810-420a-af04-4ea75d328791,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T21:36:39.778780070Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:367f2c30b7577933f36f5a2a6d14047516b4f5fe0b4a76be88be2485ef0ba7d3,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-jlpx5,Uid:a3128151-c8ce-44da-a192-3b4a2ae1e3f8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704749807498110
002,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-jlpx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3128151-c8ce-44da-a192-3b4a2ae1e3f8,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T21:36:39.778781142Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6a72f9936106cfe8989fd53cd20b2b35e987435d645ee19cbbf09aa9f02de3df,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-rj499,Uid:5873675f-8a6c-4404-be01-b46763a62f5c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704749804033003061,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-rj499,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5873675f-8a6c-4404-be01-b46763a62f5c,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T21:36:39.
778776016Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9036c894ad3404e927665c586bee01b9ede100a62ffc307204856217d014b025,Metadata:&PodSandboxMetadata{Name:kube-proxy-8qs2r,Uid:ed301cf2-3f54-4b4c-880b-2fe829c81093,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704749800135269019,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8qs2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed301cf2-3f54-4b4c-880b-2fe829c81093,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-08T21:36:39.778770885Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4fbeb031951ac718803f84c1d202280a94e3361a88fe22bfe7da38e8daf08b76,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1ef46fa1-8048-4f26-b999-6b78c5450cb8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704749800121599071,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef46fa1-8048-4f26-b999-6b78c5450cb8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2024-01-08T21:36:39.778778646Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e653836694ad618561fe4fe96d87e01f876dfc37e1929f04c3e83912b9b6f5b5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-930023,Uid:f648c750c1fcf7ff3a889e684ae9738a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704749794386637859,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f648c750c1fcf7ff3a889e684ae9738a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f648c750c1fcf7ff3a889e684ae9738a,kubernetes.io/config.seen: 2024-01-08T21:36:33.780063118Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c6b0d8cddd1cfff0e8612365aba34c5d49a858d8a20ea2a9b341df852d440364,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-930023,Uid:f4caeedcddcde
c781bbb93408f1e0287,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704749794363263160,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4caeedcddcdec781bbb93408f1e0287,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f4caeedcddcdec781bbb93408f1e0287,kubernetes.io/config.seen: 2024-01-08T21:36:33.780064676Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5e121919d98f68d6e6eecf7e6a5f19a99fc772531c8f714e0d96d8ce36262730,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-930023,Uid:76d5e6f3e4eb948f415b8d1bf28546aa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704749794350021305,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d5e6f3e4
eb948f415b8d1bf28546aa,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.142:2379,kubernetes.io/config.hash: 76d5e6f3e4eb948f415b8d1bf28546aa,kubernetes.io/config.seen: 2024-01-08T21:36:33.780054473Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c4af4d7b9cd0ce7d71fbfdc32c9c8676427ff1d2d3a42ab06f07b01bcba93121,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-930023,Uid:25852399f68db47cb85b5f113983dded,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1704749794316598136,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25852399f68db47cb85b5f113983dded,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.142:8443,kubernetes.io/config.hash: 25852399f68db47cb85b5f1139
83dded,kubernetes.io/config.seen: 2024-01-08T21:36:33.780061493Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=3e3c8ffb-2103-49fc-b06a-a58ec3dd7589 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 08 21:50:10 embed-certs-930023 crio[727]: time="2024-01-08 21:50:10.940464571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=66fd6d90-2029-4de6-b59f-0f48ac5e4f4b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:50:10 embed-certs-930023 crio[727]: time="2024-01-08 21:50:10.940523618Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=66fd6d90-2029-4de6-b59f-0f48ac5e4f4b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:50:10 embed-certs-930023 crio[727]: time="2024-01-08 21:50:10.940722994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c,PodSandboxId:4fbeb031951ac718803f84c1d202280a94e3361a88fe22bfe7da38e8daf08b76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749831065446353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef46fa1-8048-4f26-b999-6b78c5450cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 68872a5c,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95ea6fd0defe771f009bd79c5348511ad75bea05732ff1a2b816bd58eeba1b3d,PodSandboxId:6565297492e79f3df1c9a4130be7c007460cc922548e6b1a925e21959516e31d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749811345129715,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ccaabb4-5810-420a-af04-4ea75d328791,},Annotations:map[string]string{io.kubernetes.container.hash: 25005988,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320,PodSandboxId:367f2c30b7577933f36f5a2a6d14047516b4f5fe0b4a76be88be2485ef0ba7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749808271567935,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jlpx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3128151-c8ce-44da-a192-3b4a2ae1e3f8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b6d9076,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1,PodSandboxId:9036c894ad3404e927665c586bee01b9ede100a62ffc307204856217d014b025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749800890222174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qs2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed301cf2-
3f54-4b4c-880b-2fe829c81093,},Annotations:map[string]string{io.kubernetes.container.hash: 2132f592,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5,PodSandboxId:4fbeb031951ac718803f84c1d202280a94e3361a88fe22bfe7da38e8daf08b76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749800681470370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef46fa1-80
48-4f26-b999-6b78c5450cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 68872a5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e,PodSandboxId:5e121919d98f68d6e6eecf7e6a5f19a99fc772531c8f714e0d96d8ce36262730,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749795499108360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d5e6f3e4eb948f415b8d1bf28546aa,},Annotations:map[string
]string{io.kubernetes.container.hash: efae481c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b,PodSandboxId:c6b0d8cddd1cfff0e8612365aba34c5d49a858d8a20ea2a9b341df852d440364,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749795230849821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4caeedcddcdec781bbb93408f1e0287,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267,PodSandboxId:c4af4d7b9cd0ce7d71fbfdc32c9c8676427ff1d2d3a42ab06f07b01bcba93121,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749795075583754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25852399f68db47cb85b5f113983dded,},Annotations:map[string]string{io.kubernete
s.container.hash: 27814462,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87,PodSandboxId:e653836694ad618561fe4fe96d87e01f876dfc37e1929f04c3e83912b9b6f5b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749794807236128,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f648c750c1fcf7ff3a889e684ae9738a,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=66fd6d90-2029-4de6-b59f-0f48ac5e4f4b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:50:10 embed-certs-930023 crio[727]: time="2024-01-08 21:50:10.968978375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=11c4b088-ffd7-4264-8a7b-aee4541f720b name=/runtime.v1.RuntimeService/Version
	Jan 08 21:50:10 embed-certs-930023 crio[727]: time="2024-01-08 21:50:10.969041304Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=11c4b088-ffd7-4264-8a7b-aee4541f720b name=/runtime.v1.RuntimeService/Version
	Jan 08 21:50:10 embed-certs-930023 crio[727]: time="2024-01-08 21:50:10.970315252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7a40b368-a802-4541-984f-eb02228b4c06 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:50:10 embed-certs-930023 crio[727]: time="2024-01-08 21:50:10.970761932Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750610970744416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=7a40b368-a802-4541-984f-eb02228b4c06 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:50:10 embed-certs-930023 crio[727]: time="2024-01-08 21:50:10.971387584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b6a662a1-602a-4edc-ac7d-8d2983a1ec40 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:50:10 embed-certs-930023 crio[727]: time="2024-01-08 21:50:10.971474538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b6a662a1-602a-4edc-ac7d-8d2983a1ec40 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:50:10 embed-certs-930023 crio[727]: time="2024-01-08 21:50:10.971694792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c,PodSandboxId:4fbeb031951ac718803f84c1d202280a94e3361a88fe22bfe7da38e8daf08b76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749831065446353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef46fa1-8048-4f26-b999-6b78c5450cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 68872a5c,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95ea6fd0defe771f009bd79c5348511ad75bea05732ff1a2b816bd58eeba1b3d,PodSandboxId:6565297492e79f3df1c9a4130be7c007460cc922548e6b1a925e21959516e31d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749811345129715,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ccaabb4-5810-420a-af04-4ea75d328791,},Annotations:map[string]string{io.kubernetes.container.hash: 25005988,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320,PodSandboxId:367f2c30b7577933f36f5a2a6d14047516b4f5fe0b4a76be88be2485ef0ba7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749808271567935,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jlpx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3128151-c8ce-44da-a192-3b4a2ae1e3f8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b6d9076,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1,PodSandboxId:9036c894ad3404e927665c586bee01b9ede100a62ffc307204856217d014b025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749800890222174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qs2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed301cf2-
3f54-4b4c-880b-2fe829c81093,},Annotations:map[string]string{io.kubernetes.container.hash: 2132f592,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5,PodSandboxId:4fbeb031951ac718803f84c1d202280a94e3361a88fe22bfe7da38e8daf08b76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749800681470370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef46fa1-80
48-4f26-b999-6b78c5450cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 68872a5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e,PodSandboxId:5e121919d98f68d6e6eecf7e6a5f19a99fc772531c8f714e0d96d8ce36262730,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749795499108360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d5e6f3e4eb948f415b8d1bf28546aa,},Annotations:map[string
]string{io.kubernetes.container.hash: efae481c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b,PodSandboxId:c6b0d8cddd1cfff0e8612365aba34c5d49a858d8a20ea2a9b341df852d440364,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749795230849821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4caeedcddcdec781bbb93408f1e0287,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267,PodSandboxId:c4af4d7b9cd0ce7d71fbfdc32c9c8676427ff1d2d3a42ab06f07b01bcba93121,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749795075583754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25852399f68db47cb85b5f113983dded,},Annotations:map[string]string{io.kubernete
s.container.hash: 27814462,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87,PodSandboxId:e653836694ad618561fe4fe96d87e01f876dfc37e1929f04c3e83912b9b6f5b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749794807236128,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f648c750c1fcf7ff3a889e684ae9738a,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b6a662a1-602a-4edc-ac7d-8d2983a1ec40 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.017240071Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8337330f-8e8f-46cb-bb10-8150ee89a9b8 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.017384355Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8337330f-8e8f-46cb-bb10-8150ee89a9b8 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.019086818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=682be99d-d50a-4aff-b2af-6365ba253047 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.019532883Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750611019517420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=682be99d-d50a-4aff-b2af-6365ba253047 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.021392629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=605f2ac2-829b-401d-af25-9d54dba0232b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.021471553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=605f2ac2-829b-401d-af25-9d54dba0232b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.021721510Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c,PodSandboxId:4fbeb031951ac718803f84c1d202280a94e3361a88fe22bfe7da38e8daf08b76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749831065446353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef46fa1-8048-4f26-b999-6b78c5450cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 68872a5c,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95ea6fd0defe771f009bd79c5348511ad75bea05732ff1a2b816bd58eeba1b3d,PodSandboxId:6565297492e79f3df1c9a4130be7c007460cc922548e6b1a925e21959516e31d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749811345129715,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ccaabb4-5810-420a-af04-4ea75d328791,},Annotations:map[string]string{io.kubernetes.container.hash: 25005988,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320,PodSandboxId:367f2c30b7577933f36f5a2a6d14047516b4f5fe0b4a76be88be2485ef0ba7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749808271567935,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jlpx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3128151-c8ce-44da-a192-3b4a2ae1e3f8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b6d9076,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1,PodSandboxId:9036c894ad3404e927665c586bee01b9ede100a62ffc307204856217d014b025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749800890222174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qs2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed301cf2-
3f54-4b4c-880b-2fe829c81093,},Annotations:map[string]string{io.kubernetes.container.hash: 2132f592,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5,PodSandboxId:4fbeb031951ac718803f84c1d202280a94e3361a88fe22bfe7da38e8daf08b76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749800681470370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef46fa1-80
48-4f26-b999-6b78c5450cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 68872a5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e,PodSandboxId:5e121919d98f68d6e6eecf7e6a5f19a99fc772531c8f714e0d96d8ce36262730,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749795499108360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d5e6f3e4eb948f415b8d1bf28546aa,},Annotations:map[string
]string{io.kubernetes.container.hash: efae481c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b,PodSandboxId:c6b0d8cddd1cfff0e8612365aba34c5d49a858d8a20ea2a9b341df852d440364,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749795230849821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4caeedcddcdec781bbb93408f1e0287,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267,PodSandboxId:c4af4d7b9cd0ce7d71fbfdc32c9c8676427ff1d2d3a42ab06f07b01bcba93121,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749795075583754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25852399f68db47cb85b5f113983dded,},Annotations:map[string]string{io.kubernete
s.container.hash: 27814462,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87,PodSandboxId:e653836694ad618561fe4fe96d87e01f876dfc37e1929f04c3e83912b9b6f5b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749794807236128,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f648c750c1fcf7ff3a889e684ae9738a,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=605f2ac2-829b-401d-af25-9d54dba0232b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.067671045Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1ebe7359-a82b-4cd9-8c39-d8ff8fddfafe name=/runtime.v1.RuntimeService/Version
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.067754151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1ebe7359-a82b-4cd9-8c39-d8ff8fddfafe name=/runtime.v1.RuntimeService/Version
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.069114904Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b2016446-b223-4d3d-80b2-551834bebaf0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.069509184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750611069495601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b2016446-b223-4d3d-80b2-551834bebaf0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.070309169Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b38adbfb-1ca6-4c28-97ff-7e2c28e4d36b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.070377139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b38adbfb-1ca6-4c28-97ff-7e2c28e4d36b name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:50:11 embed-certs-930023 crio[727]: time="2024-01-08 21:50:11.070617984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c,PodSandboxId:4fbeb031951ac718803f84c1d202280a94e3361a88fe22bfe7da38e8daf08b76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749831065446353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef46fa1-8048-4f26-b999-6b78c5450cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 68872a5c,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95ea6fd0defe771f009bd79c5348511ad75bea05732ff1a2b816bd58eeba1b3d,PodSandboxId:6565297492e79f3df1c9a4130be7c007460cc922548e6b1a925e21959516e31d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749811345129715,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ccaabb4-5810-420a-af04-4ea75d328791,},Annotations:map[string]string{io.kubernetes.container.hash: 25005988,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320,PodSandboxId:367f2c30b7577933f36f5a2a6d14047516b4f5fe0b4a76be88be2485ef0ba7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749808271567935,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jlpx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3128151-c8ce-44da-a192-3b4a2ae1e3f8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b6d9076,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1,PodSandboxId:9036c894ad3404e927665c586bee01b9ede100a62ffc307204856217d014b025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749800890222174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qs2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed301cf2-
3f54-4b4c-880b-2fe829c81093,},Annotations:map[string]string{io.kubernetes.container.hash: 2132f592,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5,PodSandboxId:4fbeb031951ac718803f84c1d202280a94e3361a88fe22bfe7da38e8daf08b76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749800681470370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef46fa1-80
48-4f26-b999-6b78c5450cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 68872a5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e,PodSandboxId:5e121919d98f68d6e6eecf7e6a5f19a99fc772531c8f714e0d96d8ce36262730,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749795499108360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d5e6f3e4eb948f415b8d1bf28546aa,},Annotations:map[string
]string{io.kubernetes.container.hash: efae481c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b,PodSandboxId:c6b0d8cddd1cfff0e8612365aba34c5d49a858d8a20ea2a9b341df852d440364,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749795230849821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4caeedcddcdec781bbb93408f1e0287,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267,PodSandboxId:c4af4d7b9cd0ce7d71fbfdc32c9c8676427ff1d2d3a42ab06f07b01bcba93121,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749795075583754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25852399f68db47cb85b5f113983dded,},Annotations:map[string]string{io.kubernete
s.container.hash: 27814462,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87,PodSandboxId:e653836694ad618561fe4fe96d87e01f876dfc37e1929f04c3e83912b9b6f5b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749794807236128,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f648c750c1fcf7ff3a889e684ae9738a,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b38adbfb-1ca6-4c28-97ff-7e2c28e4d36b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60dc1219493a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   4fbeb031951ac       storage-provisioner
	95ea6fd0defe7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   6565297492e79       busybox
	040312a16e063       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   367f2c30b7577       coredns-5dd5756b68-jlpx5
	ec5e034aaa19f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   9036c894ad340       kube-proxy-8qs2r
	82b4cf0190ce0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   4fbeb031951ac       storage-provisioner
	07d60f2b2378b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   5e121919d98f6       etcd-embed-certs-930023
	18264b7b5f911       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   c6b0d8cddd1cf       kube-scheduler-embed-certs-930023
	aab0e15e7d8be       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   c4af4d7b9cd0c       kube-apiserver-embed-certs-930023
	3722917aa56b0       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   e653836694ad6       kube-controller-manager-embed-certs-930023
	
	
	==> coredns [040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50305 - 14687 "HINFO IN 594706197751516603.1586272236687783089. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015082507s
	
	
	==> describe nodes <==
	Name:               embed-certs-930023
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-930023
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=embed-certs-930023
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_27_48_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:27:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-930023
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:50:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:47:23 +0000   Mon, 08 Jan 2024 21:27:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:47:23 +0000   Mon, 08 Jan 2024 21:27:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:47:23 +0000   Mon, 08 Jan 2024 21:27:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:47:23 +0000   Mon, 08 Jan 2024 21:36:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    embed-certs-930023
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2804a5b84d73408e9397b4caab2b5e2d
	  System UUID:                2804a5b8-4d73-408e-9397-b4caab2b5e2d
	  Boot ID:                    cfbceab9-05b0-4b7e-960d-291223a439c9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-5dd5756b68-jlpx5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-930023                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-930023             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-930023    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-8qs2r                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-930023             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-rj499               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node embed-certs-930023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node embed-certs-930023 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node embed-certs-930023 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-930023 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-930023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-930023 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                22m                kubelet          Node embed-certs-930023 status is now: NodeReady
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-930023 event: Registered Node embed-certs-930023 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-930023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-930023 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-930023 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-930023 event: Registered Node embed-certs-930023 in Controller
	
	
	==> dmesg <==
	[Jan 8 21:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068858] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.692755] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jan 8 21:36] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.134220] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.608740] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.965119] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.116396] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.145997] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.105066] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.252224] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[ +18.001318] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[ +14.084638] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e] <==
	{"level":"info","ts":"2024-01-08T21:36:37.157872Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-08T21:36:37.161994Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.142:2380"}
	{"level":"info","ts":"2024-01-08T21:36:37.162039Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.142:2380"}
	{"level":"info","ts":"2024-01-08T21:36:37.808637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-08T21:36:37.808836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-08T21:36:37.809022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 received MsgPreVoteResp from d7a5d3e20a6b0ba7 at term 2"}
	{"level":"info","ts":"2024-01-08T21:36:37.809099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became candidate at term 3"}
	{"level":"info","ts":"2024-01-08T21:36:37.809214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 received MsgVoteResp from d7a5d3e20a6b0ba7 at term 3"}
	{"level":"info","ts":"2024-01-08T21:36:37.809302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7a5d3e20a6b0ba7 became leader at term 3"}
	{"level":"info","ts":"2024-01-08T21:36:37.809362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d7a5d3e20a6b0ba7 elected leader d7a5d3e20a6b0ba7 at term 3"}
	{"level":"info","ts":"2024-01-08T21:36:37.813869Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:36:37.814832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.142:2379"}
	{"level":"info","ts":"2024-01-08T21:36:37.815277Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:36:37.816055Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T21:36:37.813815Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d7a5d3e20a6b0ba7","local-member-attributes":"{Name:embed-certs-930023 ClientURLs:[https://192.168.39.142:2379]}","request-path":"/0/members/d7a5d3e20a6b0ba7/attributes","cluster-id":"f7d6b5428c0c9dc0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:36:37.82207Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:36:37.822092Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-01-08T21:42:26.90789Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.985899ms","expected-duration":"100ms","prefix":"","request":"header:<ID:839794796499238848 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-930023\" mod_revision:861 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-930023\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-930023\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-08T21:42:26.908617Z","caller":"traceutil/trace.go:171","msg":"trace[2128994375] transaction","detail":"{read_only:false; response_revision:871; number_of_response:1; }","duration":"295.963181ms","start":"2024-01-08T21:42:26.612616Z","end":"2024-01-08T21:42:26.908579Z","steps":["trace[2128994375] 'process raft request'  (duration: 36.183862ms)","trace[2128994375] 'compare'  (duration: 257.903032ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T21:42:27.150561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.512658ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:42:27.150765Z","caller":"traceutil/trace.go:171","msg":"trace[583155111] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:871; }","duration":"125.717037ms","start":"2024-01-08T21:42:27.025014Z","end":"2024-01-08T21:42:27.150731Z","steps":["trace[583155111] 'range keys from in-memory index tree'  (duration: 125.367851ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:42:28.83914Z","caller":"traceutil/trace.go:171","msg":"trace[306325198] transaction","detail":"{read_only:false; response_revision:872; number_of_response:1; }","duration":"247.335608ms","start":"2024-01-08T21:42:28.591778Z","end":"2024-01-08T21:42:28.839114Z","steps":["trace[306325198] 'process raft request'  (duration: 246.889218ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:46:37.85256Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":829}
	{"level":"info","ts":"2024-01-08T21:46:37.856128Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":829,"took":"3.231177ms","hash":3070780785}
	{"level":"info","ts":"2024-01-08T21:46:37.856198Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3070780785,"revision":829,"compact-revision":-1}
	
	
	==> kernel <==
	 21:50:11 up 14 min,  0 users,  load average: 0.20, 0.21, 0.19
	Linux embed-certs-930023 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267] <==
	I0108 21:46:39.448067       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 21:46:40.448228       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:46:40.448288       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:46:40.448297       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:46:40.448332       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:46:40.448384       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:46:40.449581       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:47:39.293254       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 21:47:40.449037       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:47:40.449227       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:47:40.449266       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:47:40.450269       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:47:40.450315       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:47:40.450321       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:48:39.293695       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 21:49:39.293465       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 21:49:40.449486       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:49:40.449603       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:49:40.449632       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:49:40.450672       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:49:40.450773       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:49:40.450781       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87] <==
	I0108 21:44:22.449306       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:44:51.956080       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:44:52.458035       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:45:21.963235       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:45:22.465856       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:45:51.969346       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:45:52.474735       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:46:21.975351       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:46:22.484507       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:46:51.983680       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:46:52.494998       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:47:21.989279       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:47:22.503065       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 21:47:44.868358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="283.717µs"
	E0108 21:47:51.995557       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:47:52.511293       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 21:47:55.875486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="205.375µs"
	E0108 21:48:22.001201       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:48:22.526022       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:48:52.007458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:48:52.534764       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:49:22.012715       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:49:22.544386       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:49:52.020126       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:49:52.553083       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1] <==
	I0108 21:36:41.178295       1 server_others.go:69] "Using iptables proxy"
	I0108 21:36:41.203426       1 node.go:141] Successfully retrieved node IP: 192.168.39.142
	I0108 21:36:41.281270       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:36:41.281344       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:36:41.285510       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:36:41.285589       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:36:41.285817       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:36:41.285864       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:36:41.287162       1 config.go:188] "Starting service config controller"
	I0108 21:36:41.287229       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:36:41.287284       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:36:41.287327       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:36:41.288134       1 config.go:315] "Starting node config controller"
	I0108 21:36:41.288176       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:36:41.388118       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:36:41.388236       1 shared_informer.go:318] Caches are synced for node config
	I0108 21:36:41.388289       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b] <==
	I0108 21:36:37.604237       1 serving.go:348] Generated self-signed cert in-memory
	W0108 21:36:39.395532       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 21:36:39.395622       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:36:39.395651       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 21:36:39.395678       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 21:36:39.461001       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0108 21:36:39.461090       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:36:39.462707       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 21:36:39.462758       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:36:39.463347       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0108 21:36:39.463443       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 21:36:39.563571       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:36:02 UTC, ends at Mon 2024-01-08 21:50:11 UTC. --
	Jan 08 21:47:33 embed-certs-930023 kubelet[932]: E0108 21:47:33.868224     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:47:33 embed-certs-930023 kubelet[932]: E0108 21:47:33.881173     932 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:47:33 embed-certs-930023 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:47:33 embed-certs-930023 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:47:33 embed-certs-930023 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:47:44 embed-certs-930023 kubelet[932]: E0108 21:47:44.851087     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:47:55 embed-certs-930023 kubelet[932]: E0108 21:47:55.851804     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:48:10 embed-certs-930023 kubelet[932]: E0108 21:48:10.850668     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:48:22 embed-certs-930023 kubelet[932]: E0108 21:48:22.851602     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:48:33 embed-certs-930023 kubelet[932]: E0108 21:48:33.869754     932 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:48:33 embed-certs-930023 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:48:33 embed-certs-930023 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:48:33 embed-certs-930023 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:48:37 embed-certs-930023 kubelet[932]: E0108 21:48:37.851269     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:48:48 embed-certs-930023 kubelet[932]: E0108 21:48:48.851369     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:49:01 embed-certs-930023 kubelet[932]: E0108 21:49:01.851569     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:49:14 embed-certs-930023 kubelet[932]: E0108 21:49:14.851026     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:49:29 embed-certs-930023 kubelet[932]: E0108 21:49:29.852234     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:49:33 embed-certs-930023 kubelet[932]: E0108 21:49:33.870333     932 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:49:33 embed-certs-930023 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:49:33 embed-certs-930023 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:49:33 embed-certs-930023 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:49:40 embed-certs-930023 kubelet[932]: E0108 21:49:40.850994     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:49:54 embed-certs-930023 kubelet[932]: E0108 21:49:54.850267     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:50:05 embed-certs-930023 kubelet[932]: E0108 21:50:05.850415     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	
	
	==> storage-provisioner [60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c] <==
	I0108 21:37:11.203372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 21:37:11.224836       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 21:37:11.225578       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 21:37:28.633796       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 21:37:28.634145       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-930023_ab9dd66f-15b7-4c6d-855b-312e7052f765!
	I0108 21:37:28.635432       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"730647eb-bf8f-4237-87b0-8860cd3b96c5", APIVersion:"v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-930023_ab9dd66f-15b7-4c6d-855b-312e7052f765 became leader
	I0108 21:37:28.735420       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-930023_ab9dd66f-15b7-4c6d-855b-312e7052f765!
	
	
	==> storage-provisioner [82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5] <==
	I0108 21:36:40.872322       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0108 21:37:10.874414       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-930023 -n embed-certs-930023
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-930023 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-rj499
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-930023 describe pod metrics-server-57f55c9bc5-rj499
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-930023 describe pod metrics-server-57f55c9bc5-rj499: exit status 1 (69.512454ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-rj499" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-930023 describe pod metrics-server-57f55c9bc5-rj499: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (140.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-233407 --alsologtostderr -v=3
E0108 21:44:26.819686   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 21:44:31.714526   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:44:31.719890   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:44:31.730229   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:44:31.750609   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:44:31.791517   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:44:31.872082   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:44:32.032552   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:44:32.353140   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:44:32.993414   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:44:34.274430   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:44:36.834802   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:44:41.955331   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:44:52.196514   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p newest-cni-233407 --alsologtostderr -v=3: exit status 82 (2m1.817326275s)

                                                
                                                
-- stdout --
	* Stopping node "newest-cni-233407"  ...
	* Stopping node "newest-cni-233407"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:42:57.016806   55238 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:42:57.016978   55238 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:42:57.016992   55238 out.go:309] Setting ErrFile to fd 2...
	I0108 21:42:57.016999   55238 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:42:57.017304   55238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:42:57.017549   55238 out.go:303] Setting JSON to false
	I0108 21:42:57.017636   55238 mustload.go:65] Loading cluster: newest-cni-233407
	I0108 21:42:57.018062   55238 config.go:182] Loaded profile config "newest-cni-233407": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 21:42:57.018147   55238 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/newest-cni-233407/config.json ...
	I0108 21:42:57.018319   55238 mustload.go:65] Loading cluster: newest-cni-233407
	I0108 21:42:57.018482   55238 config.go:182] Loaded profile config "newest-cni-233407": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 21:42:57.018525   55238 stop.go:39] StopHost: newest-cni-233407
	I0108 21:42:57.019073   55238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:42:57.019129   55238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:42:57.035993   55238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
	I0108 21:42:57.036483   55238 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:42:57.037100   55238 main.go:141] libmachine: Using API Version  1
	I0108 21:42:57.037128   55238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:42:57.037586   55238 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:42:57.040078   55238 out.go:177] * Stopping node "newest-cni-233407"  ...
	I0108 21:42:57.042360   55238 main.go:141] libmachine: Stopping "newest-cni-233407"...
	I0108 21:42:57.042407   55238 main.go:141] libmachine: (newest-cni-233407) Calling .GetState
	I0108 21:42:57.044771   55238 main.go:141] libmachine: (newest-cni-233407) Calling .Stop
	I0108 21:42:57.048839   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 0/60
	I0108 21:42:58.050746   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 1/60
	I0108 21:42:59.052180   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 2/60
	I0108 21:43:00.053660   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 3/60
	I0108 21:43:01.055279   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 4/60
	I0108 21:43:02.057568   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 5/60
	I0108 21:43:03.059104   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 6/60
	I0108 21:43:04.060624   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 7/60
	I0108 21:43:05.062692   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 8/60
	I0108 21:43:06.064932   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 9/60
	I0108 21:43:07.066644   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 10/60
	I0108 21:43:08.068970   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 11/60
	I0108 21:43:09.070953   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 12/60
	I0108 21:43:10.072627   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 13/60
	I0108 21:43:11.074799   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 14/60
	I0108 21:43:12.076327   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 15/60
	I0108 21:43:13.078877   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 16/60
	I0108 21:43:14.080267   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 17/60
	I0108 21:43:15.082695   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 18/60
	I0108 21:43:16.084215   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 19/60
	I0108 21:43:17.086429   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 20/60
	I0108 21:43:18.087864   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 21/60
	I0108 21:43:19.089941   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 22/60
	I0108 21:43:20.091650   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 23/60
	I0108 21:43:21.093690   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 24/60
	I0108 21:43:22.095896   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 25/60
	I0108 21:43:23.097500   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 26/60
	I0108 21:43:24.099466   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 27/60
	I0108 21:43:25.100829   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 28/60
	I0108 21:43:26.102594   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 29/60
	I0108 21:43:27.104812   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 30/60
	I0108 21:43:28.106203   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 31/60
	I0108 21:43:29.107849   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 32/60
	I0108 21:43:30.109508   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 33/60
	I0108 21:43:31.110951   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 34/60
	I0108 21:43:32.113395   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 35/60
	I0108 21:43:33.115193   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 36/60
	I0108 21:43:34.116706   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 37/60
	I0108 21:43:35.118819   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 38/60
	I0108 21:43:36.121272   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 39/60
	I0108 21:43:37.123606   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 40/60
	I0108 21:43:38.124970   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 41/60
	I0108 21:43:39.126830   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 42/60
	I0108 21:43:40.128177   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 43/60
	I0108 21:43:41.129545   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 44/60
	I0108 21:43:42.131039   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 45/60
	I0108 21:43:43.132466   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 46/60
	I0108 21:43:44.134962   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 47/60
	I0108 21:43:45.137229   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 48/60
	I0108 21:43:46.139150   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 49/60
	I0108 21:43:47.140574   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 50/60
	I0108 21:43:48.141835   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 51/60
	I0108 21:43:49.144034   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 52/60
	I0108 21:43:50.146040   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 53/60
	I0108 21:43:51.148039   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 54/60
	I0108 21:43:52.150162   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 55/60
	I0108 21:43:53.151672   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 56/60
	I0108 21:43:54.153718   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 57/60
	I0108 21:43:55.156179   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 58/60
	I0108 21:43:56.157765   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 59/60
	I0108 21:43:57.158519   55238 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 21:43:57.158576   55238 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:43:57.158593   55238 retry.go:31] will retry after 1.459770416s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:43:58.618936   55238 stop.go:39] StopHost: newest-cni-233407
	I0108 21:43:58.619342   55238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:43:58.619384   55238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:43:58.634922   55238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35311
	I0108 21:43:58.635324   55238 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:43:58.635775   55238 main.go:141] libmachine: Using API Version  1
	I0108 21:43:58.635796   55238 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:43:58.636122   55238 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:43:58.638227   55238 out.go:177] * Stopping node "newest-cni-233407"  ...
	I0108 21:43:58.639698   55238 main.go:141] libmachine: Stopping "newest-cni-233407"...
	I0108 21:43:58.639719   55238 main.go:141] libmachine: (newest-cni-233407) Calling .GetState
	I0108 21:43:58.641360   55238 main.go:141] libmachine: (newest-cni-233407) Calling .Stop
	I0108 21:43:58.644781   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 0/60
	I0108 21:43:59.647143   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 1/60
	I0108 21:44:00.648977   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 2/60
	I0108 21:44:01.650686   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 3/60
	I0108 21:44:02.652752   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 4/60
	I0108 21:44:03.655011   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 5/60
	I0108 21:44:04.656599   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 6/60
	I0108 21:44:05.658883   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 7/60
	I0108 21:44:06.660900   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 8/60
	I0108 21:44:07.662757   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 9/60
	I0108 21:44:08.664746   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 10/60
	I0108 21:44:09.666870   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 11/60
	I0108 21:44:10.668329   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 12/60
	I0108 21:44:11.671051   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 13/60
	I0108 21:44:12.673231   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 14/60
	I0108 21:44:13.675150   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 15/60
	I0108 21:44:14.676879   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 16/60
	I0108 21:44:15.678747   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 17/60
	I0108 21:44:16.680813   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 18/60
	I0108 21:44:17.682777   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 19/60
	I0108 21:44:18.685125   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 20/60
	I0108 21:44:19.686736   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 21/60
	I0108 21:44:20.688072   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 22/60
	I0108 21:44:21.690168   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 23/60
	I0108 21:44:22.692348   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 24/60
	I0108 21:44:23.693944   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 25/60
	I0108 21:44:24.695579   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 26/60
	I0108 21:44:25.696978   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 27/60
	I0108 21:44:26.698655   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 28/60
	I0108 21:44:27.700418   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 29/60
	I0108 21:44:28.702786   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 30/60
	I0108 21:44:29.704494   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 31/60
	I0108 21:44:30.706995   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 32/60
	I0108 21:44:31.709089   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 33/60
	I0108 21:44:32.710932   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 34/60
	I0108 21:44:33.713126   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 35/60
	I0108 21:44:34.714691   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 36/60
	I0108 21:44:35.716964   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 37/60
	I0108 21:44:36.718519   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 38/60
	I0108 21:44:37.720429   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 39/60
	I0108 21:44:38.722252   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 40/60
	I0108 21:44:39.724320   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 41/60
	I0108 21:44:40.726540   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 42/60
	I0108 21:44:41.727930   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 43/60
	I0108 21:44:42.729652   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 44/60
	I0108 21:44:43.731488   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 45/60
	I0108 21:44:44.732890   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 46/60
	I0108 21:44:45.734742   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 47/60
	I0108 21:44:46.737006   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 48/60
	I0108 21:44:47.738621   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 49/60
	I0108 21:44:48.740874   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 50/60
	I0108 21:44:49.742390   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 51/60
	I0108 21:44:50.743820   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 52/60
	I0108 21:44:51.746279   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 53/60
	I0108 21:44:52.747871   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 54/60
	I0108 21:44:53.750119   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 55/60
	I0108 21:44:54.751833   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 56/60
	I0108 21:44:55.753294   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 57/60
	I0108 21:44:56.754845   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 58/60
	I0108 21:44:57.756800   55238 main.go:141] libmachine: (newest-cni-233407) Waiting for machine to stop 59/60
	I0108 21:44:58.757740   55238 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0108 21:44:58.757782   55238 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0108 21:44:58.759938   55238 out.go:177] 
	W0108 21:44:58.761382   55238 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0108 21:44:58.761402   55238 out.go:239] * 
	* 
	W0108 21:44:58.763793   55238 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:44:58.765310   55238 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p newest-cni-233407 --alsologtostderr -v=3" : exit status 82
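The stderr above shows the shape of this failure: the stop path polls the VM state once per second for up to 60 attempts, retries the whole stop once after a short backoff, and then exits with GUEST_STOP_TIMEOUT because the guest never leaves the "Running" state. The Go sketch below mirrors that wait-and-retry pattern; stopVM and vmState are hypothetical stand-ins for the KVM driver calls, not minikube's actual API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmState and stopVM are hypothetical stand-ins for a KVM driver's
	// state query and stop request.
	func vmState(name string) string { return "Running" }
	func stopVM(name string) error   { return nil }

	// stopWithTimeout asks the VM to stop, then polls its state once per
	// second for up to maxWait attempts, mirroring the
	// "Waiting for machine to stop N/60" lines in the log above.
	func stopWithTimeout(name string, maxWait int) error {
		if err := stopVM(name); err != nil {
			return err
		}
		for i := 0; i < maxWait; i++ {
			if vmState(name) != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxWait)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		name := "newest-cni-233407"
		// One retry after a short backoff, as in the log, then give up.
		if err := stopWithTimeout(name, 60); err != nil {
			time.Sleep(1500 * time.Millisecond)
			if err := stopWithTimeout(name, 60); err != nil {
				fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
			}
		}
	}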
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-233407 -n newest-cni-233407
E0108 21:45:12.677225   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-233407 -n newest-cni-233407: exit status 3 (18.608646945s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:45:17.376455   55545 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host
	E0108 21:45:17.376482   55545 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-233407" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (140.43s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-233407 -n newest-cni-233407
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-233407 -n newest-cni-233407: exit status 3 (3.195321428s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:45:20.572435   55619 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host
	E0108 21:45:20.572454   55619 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-233407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-233407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152816549s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p newest-cni-233407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
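The addon enable fails for the same underlying reason as the status checks around it: the node at 192.168.61.145 no longer answers on port 22, so every step that needs an SSH session (here the crictl "check paused" probe) errors out with "no route to host". A minimal Go sketch of that reachability probe, assuming only the address taken from the log (the probe itself is not part of minikube):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the "dial tcp ... connect: no route to host"
		// lines in the stderr above.
		addr := "192.168.61.145:22"

		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// "no route to host" shows up here when the VM is unreachable,
			// which is why status reports Error and addon enable exits 11.
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable; the failure is elsewhere")
	}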
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-233407 -n newest-cni-233407
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-233407 -n newest-cni-233407: exit status 3 (3.063511543s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:45:29.788430   55689 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host
	E0108 21:45:29.788451   55689 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-233407" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (188.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 21:49:48.738894   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:59.399201   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-690577 -n default-k8s-diff-port-690577
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-08 21:52:51.680173618 +0000 UTC m=+6195.605064977
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-690577 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-690577 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.787µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-690577 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
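Both the 9m0s pod wait and the follow-up kubectl describe report "context deadline exceeded" because the same deadline governs both: once it passes, the next call fails immediately (here after 1.787µs). A minimal sketch of that behaviour with a plain context.WithTimeout poll loop; podRunning is a hypothetical check, not the test harness's helper:

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// podRunning is a hypothetical check for a pod matching
	// k8s-app=kubernetes-dashboard; the real test shells out to kubectl.
	func podRunning(ctx context.Context) (bool, error) { return false, nil }

	// waitForPod polls until the pod is running or the context's deadline
	// passes, which is what produces "context deadline exceeded" above.
	func waitForPod(ctx context.Context) error {
		ticker := time.NewTicker(5 * time.Second)
		defer ticker.Stop()
		for {
			ok, err := podRunning(ctx)
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // context.DeadlineExceeded once 9m is up
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()

		if err := waitForPod(ctx); errors.Is(err, context.DeadlineExceeded) {
			fmt.Println("pod failed to start within 9m0s:", err)
		}
		// Any follow-up call made with the same expired ctx (like the
		// kubectl describe above) fails immediately with the same error.
		fmt.Println(ctx.Err())
	}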
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-690577 -n default-k8s-diff-port-690577
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-690577 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-690577 logs -n 25: (1.334805272s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p embed-certs-930023                 | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:30 UTC | 08 Jan 24 21:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-690577       | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC | 08 Jan 24 21:40 UTC |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-879273                              | old-k8s-version-879273       | jenkins | v1.32.0 | 08 Jan 24 21:41 UTC | 08 Jan 24 21:41 UTC |
	| start   | -p newest-cni-233407 --memory=2200 --alsologtostderr   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:41 UTC | 08 Jan 24 21:42 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-233407             | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:42 UTC | 08 Jan 24 21:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-233407                                   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-233407                  | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-233407 --memory=2200 --alsologtostderr   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:45 UTC | 08 Jan 24 21:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-420119                                   | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:46 UTC | 08 Jan 24 21:46 UTC |
	| start   | -p kubernetes-upgrade-862639                           | kubernetes-upgrade-862639    | jenkins | v1.32.0 | 08 Jan 24 21:46 UTC | 08 Jan 24 21:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-862639                           | kubernetes-upgrade-862639    | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:51 UTC |
	| start   | -p kubernetes-upgrade-862639                           | kubernetes-upgrade-862639    | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | newest-cni-233407 image list                           | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-233407                                   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-233407                                   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-233407                                   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:51 UTC |
	| delete  | -p newest-cni-233407                                   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:51 UTC |
	| start   | -p kubernetes-upgrade-862639                           | kubernetes-upgrade-862639    | jenkins | v1.32.0 | 08 Jan 24 21:52 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-862639                           | kubernetes-upgrade-862639    | jenkins | v1.32.0 | 08 Jan 24 21:52 UTC | 08 Jan 24 21:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:52 UTC | 08 Jan 24 21:52 UTC |
	| start   | -p auto-458620 --memory=3072                           | auto-458620                  | jenkins | v1.32.0 | 08 Jan 24 21:52 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-862639                           | kubernetes-upgrade-862639    | jenkins | v1.32.0 | 08 Jan 24 21:52 UTC | 08 Jan 24 21:52 UTC |
	| start   | -p kindnet-458620                                      | kindnet-458620               | jenkins | v1.32.0 | 08 Jan 24 21:52 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
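The table above is minikube's local audit log of recent commands on this test host. If the minikube build used here supports it, the same data can be printed on its own (a hypothetical follow-up, not run as part of this report):

    out/minikube-linux-amd64 logs --audit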
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:52:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
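Each entry below follows that header: in the first line, for example, I is the severity (Info), 0108 the date (Jan 08), 21:52:39.073351 the timestamp, 58939 the threadid of this kindnet-458620 start process, and out.go:296 the emitting file and line. Entries tagged with threadid 58693 further down come from the concurrent auto-458620 start and are interleaved in the same capture.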
	I0108 21:52:39.073351   58939 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:52:39.073509   58939 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:52:39.073520   58939 out.go:309] Setting ErrFile to fd 2...
	I0108 21:52:39.073527   58939 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:52:39.073826   58939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:52:39.074615   58939 out.go:303] Setting JSON to false
	I0108 21:52:39.075824   58939 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9283,"bootTime":1704741476,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:52:39.075908   58939 start.go:138] virtualization: kvm guest
	I0108 21:52:39.078732   58939 out.go:177] * [kindnet-458620] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:52:39.080422   58939 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:52:39.080481   58939 notify.go:220] Checking for updates...
	I0108 21:52:39.082072   58939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:52:39.083688   58939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:52:39.085202   58939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:52:39.086739   58939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:52:39.088625   58939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:52:39.090832   58939 config.go:182] Loaded profile config "auto-458620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:52:39.090974   58939 config.go:182] Loaded profile config "default-k8s-diff-port-690577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 21:52:39.091057   58939 config.go:182] Loaded profile config "stopped-upgrade-716145": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 21:52:39.091153   58939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:52:39.131894   58939 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 21:52:39.133502   58939 start.go:298] selected driver: kvm2
	I0108 21:52:39.133522   58939 start.go:902] validating driver "kvm2" against <nil>
	I0108 21:52:39.133535   58939 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:52:39.134628   58939 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:52:39.134735   58939 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:52:39.150613   58939 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:52:39.150699   58939 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 21:52:39.151003   58939 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:52:39.151086   58939 cni.go:84] Creating CNI manager for "kindnet"
	I0108 21:52:39.151107   58939 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 21:52:39.151124   58939 start_flags.go:323] config:
	{Name:kindnet-458620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-458620 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:52:39.151346   58939 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:52:39.153412   58939 out.go:177] * Starting control plane node kindnet-458620 in cluster kindnet-458620
	I0108 21:52:35.343313   58693 main.go:141] libmachine: (auto-458620) DBG | domain auto-458620 has defined MAC address 52:54:00:cd:b6:40 in network mk-auto-458620
	I0108 21:52:35.343793   58693 main.go:141] libmachine: (auto-458620) DBG | unable to find current IP address of domain auto-458620 in network mk-auto-458620
	I0108 21:52:35.343820   58693 main.go:141] libmachine: (auto-458620) DBG | I0108 21:52:35.343755   58726 retry.go:31] will retry after 924.956747ms: waiting for machine to come up
	I0108 21:52:36.270478   58693 main.go:141] libmachine: (auto-458620) DBG | domain auto-458620 has defined MAC address 52:54:00:cd:b6:40 in network mk-auto-458620
	I0108 21:52:36.270997   58693 main.go:141] libmachine: (auto-458620) DBG | unable to find current IP address of domain auto-458620 in network mk-auto-458620
	I0108 21:52:36.271027   58693 main.go:141] libmachine: (auto-458620) DBG | I0108 21:52:36.270930   58726 retry.go:31] will retry after 1.37965203s: waiting for machine to come up
	I0108 21:52:37.652826   58693 main.go:141] libmachine: (auto-458620) DBG | domain auto-458620 has defined MAC address 52:54:00:cd:b6:40 in network mk-auto-458620
	I0108 21:52:37.653421   58693 main.go:141] libmachine: (auto-458620) DBG | unable to find current IP address of domain auto-458620 in network mk-auto-458620
	I0108 21:52:37.653456   58693 main.go:141] libmachine: (auto-458620) DBG | I0108 21:52:37.653366   58726 retry.go:31] will retry after 1.840638484s: waiting for machine to come up
	I0108 21:52:39.155031   58939 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 21:52:39.155081   58939 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 21:52:39.155094   58939 cache.go:56] Caching tarball of preloaded images
	I0108 21:52:39.155208   58939 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:52:39.155227   58939 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 21:52:39.155345   58939 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kindnet-458620/config.json ...
	I0108 21:52:39.155367   58939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kindnet-458620/config.json: {Name:mkb21e32cc24311485e83ddc8ed5ae24099ac117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:52:39.155489   58939 start.go:365] acquiring machines lock for kindnet-458620: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:52:39.496517   58693 main.go:141] libmachine: (auto-458620) DBG | domain auto-458620 has defined MAC address 52:54:00:cd:b6:40 in network mk-auto-458620
	I0108 21:52:39.497061   58693 main.go:141] libmachine: (auto-458620) DBG | unable to find current IP address of domain auto-458620 in network mk-auto-458620
	I0108 21:52:39.497093   58693 main.go:141] libmachine: (auto-458620) DBG | I0108 21:52:39.497013   58726 retry.go:31] will retry after 2.636686157s: waiting for machine to come up
	I0108 21:52:42.137474   58693 main.go:141] libmachine: (auto-458620) DBG | domain auto-458620 has defined MAC address 52:54:00:cd:b6:40 in network mk-auto-458620
	I0108 21:52:42.138025   58693 main.go:141] libmachine: (auto-458620) DBG | unable to find current IP address of domain auto-458620 in network mk-auto-458620
	I0108 21:52:42.138056   58693 main.go:141] libmachine: (auto-458620) DBG | I0108 21:52:42.137974   58726 retry.go:31] will retry after 2.825607609s: waiting for machine to come up
	I0108 21:52:44.965384   58693 main.go:141] libmachine: (auto-458620) DBG | domain auto-458620 has defined MAC address 52:54:00:cd:b6:40 in network mk-auto-458620
	I0108 21:52:44.965858   58693 main.go:141] libmachine: (auto-458620) DBG | unable to find current IP address of domain auto-458620 in network mk-auto-458620
	I0108 21:52:44.965897   58693 main.go:141] libmachine: (auto-458620) DBG | I0108 21:52:44.965802   58726 retry.go:31] will retry after 4.162839658s: waiting for machine to come up
	I0108 21:52:49.130014   58693 main.go:141] libmachine: (auto-458620) DBG | domain auto-458620 has defined MAC address 52:54:00:cd:b6:40 in network mk-auto-458620
	I0108 21:52:49.130477   58693 main.go:141] libmachine: (auto-458620) DBG | unable to find current IP address of domain auto-458620 in network mk-auto-458620
	I0108 21:52:49.130511   58693 main.go:141] libmachine: (auto-458620) DBG | I0108 21:52:49.130426   58726 retry.go:31] will retry after 4.327004063s: waiting for machine to come up
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 21:35:41 UTC, ends at Mon 2024-01-08 21:52:52 UTC. --
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.443578434Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750772443560023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ef179307-16ef-4d2d-8f42-285b4f95fb70 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.444252019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cd870ebb-726a-4df7-8d9f-54237dadfebf name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.444303775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cd870ebb-726a-4df7-8d9f-54237dadfebf name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.444487590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749808462229589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a6bba2dab0d1f65e9624e65ff3ef214aa868c18c8d0712e83d9ebeb64ac9f,PodSandboxId:334b75c1a00d4d6d920842db9bfa3da8a0b38efaad2b6c7871d2adb33a453a5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749788693859250,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc38e19f-713f-4e81-b7e0-b806ad8f0f19,},Annotations:map[string]string{io.kubernetes.container.hash: 3fba03a8,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3,PodSandboxId:52e5447296e744deb69f7b651a7752a2bac43e52606770be924768efeffca3f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749784930380408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-92m44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048c7bfa-ea87-4f91-b002-c30fe11cac2a,},Annotations:map[string]string{io.kubernetes.container.hash: fd42b953,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749777774705984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f,PodSandboxId:4e4ad6f7d8f5543a88c821b53abd8f693e58ab7be107fbd9a05140e9ff88a1ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749777543523868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzxt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
9e4ed5e-f9af-4a21-b744-73f9a3c4deda,},Annotations:map[string]string{io.kubernetes.container.hash: fd01ac29,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a,PodSandboxId:5900c522809bd1557bbd65e0f07f7997c83dbc1c42b37dcce77dcf7f91a075fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749770746616447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2df432b1e578fc196f0bf6361862fb38,},An
notations:map[string]string{io.kubernetes.container.hash: a90bff5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6,PodSandboxId:119fb3452debd70dadff0b2505a4e428e780ec2289632c4278a0650e57c883ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749770689296112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1089ee33750e83e402e7b8e5b66c06e,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd,PodSandboxId:3cfd8f8af2bd6ff31eef083e1f653bf45d3e6e4d9e0c2ac734400b2559587673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749770523852037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32a428b4314fb1783d8979f840a7a9d,},An
notations:map[string]string{io.kubernetes.container.hash: 4edaf228,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2,PodSandboxId:d19e6048643cf4c95c0bd02b29baa8b3e83685bcb68190eed46c1ef5f83a58fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749770249460449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
a6104ed3f583bbf618bcc94d8f8b7b7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cd870ebb-726a-4df7-8d9f-54237dadfebf name=/runtime.v1.RuntimeService/ListContainers
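These ListContainers responses are CRI-O's view of the containers on this node (storage-provisioner, busybox, coredns, kube-proxy, etcd, kube-scheduler, kube-apiserver, kube-controller-manager, all at restartCount 1 or 2 after the stop/start). A rough way to get the same list interactively, assuming the node is still up and crictl is available on it:

    out/minikube-linux-amd64 -p default-k8s-diff-port-690577 ssh "sudo crictl ps -a"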
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.487988805Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f512485a-63f2-4d12-a21a-f6e857e74083 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.488047403Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f512485a-63f2-4d12-a21a-f6e857e74083 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.489673464Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=019c99a9-20c4-4a69-82a4-ccdd11586381 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.490112675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750772490099313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=019c99a9-20c4-4a69-82a4-ccdd11586381 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.490918434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1ae8396d-20ca-43cb-a4cb-d39edef9093d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.490963661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1ae8396d-20ca-43cb-a4cb-d39edef9093d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.491169578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749808462229589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a6bba2dab0d1f65e9624e65ff3ef214aa868c18c8d0712e83d9ebeb64ac9f,PodSandboxId:334b75c1a00d4d6d920842db9bfa3da8a0b38efaad2b6c7871d2adb33a453a5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749788693859250,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc38e19f-713f-4e81-b7e0-b806ad8f0f19,},Annotations:map[string]string{io.kubernetes.container.hash: 3fba03a8,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3,PodSandboxId:52e5447296e744deb69f7b651a7752a2bac43e52606770be924768efeffca3f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749784930380408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-92m44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048c7bfa-ea87-4f91-b002-c30fe11cac2a,},Annotations:map[string]string{io.kubernetes.container.hash: fd42b953,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749777774705984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f,PodSandboxId:4e4ad6f7d8f5543a88c821b53abd8f693e58ab7be107fbd9a05140e9ff88a1ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749777543523868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzxt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
9e4ed5e-f9af-4a21-b744-73f9a3c4deda,},Annotations:map[string]string{io.kubernetes.container.hash: fd01ac29,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a,PodSandboxId:5900c522809bd1557bbd65e0f07f7997c83dbc1c42b37dcce77dcf7f91a075fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749770746616447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2df432b1e578fc196f0bf6361862fb38,},An
notations:map[string]string{io.kubernetes.container.hash: a90bff5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6,PodSandboxId:119fb3452debd70dadff0b2505a4e428e780ec2289632c4278a0650e57c883ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749770689296112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1089ee33750e83e402e7b8e5b66c06e,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd,PodSandboxId:3cfd8f8af2bd6ff31eef083e1f653bf45d3e6e4d9e0c2ac734400b2559587673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749770523852037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32a428b4314fb1783d8979f840a7a9d,},An
notations:map[string]string{io.kubernetes.container.hash: 4edaf228,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2,PodSandboxId:d19e6048643cf4c95c0bd02b29baa8b3e83685bcb68190eed46c1ef5f83a58fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749770249460449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
a6104ed3f583bbf618bcc94d8f8b7b7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1ae8396d-20ca-43cb-a4cb-d39edef9093d name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.534241659Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4d510863-084b-4337-b48d-2d846f6c6522 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.534303749Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4d510863-084b-4337-b48d-2d846f6c6522 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.535315056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=53d52e74-a35d-4088-a643-59f126a1a5ba name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.535828942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750772535727867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=53d52e74-a35d-4088-a643-59f126a1a5ba name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.536442512Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b81b9381-c1fa-45be-a0d7-9570f1e9c7dd name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.536487506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b81b9381-c1fa-45be-a0d7-9570f1e9c7dd name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.536696491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749808462229589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a6bba2dab0d1f65e9624e65ff3ef214aa868c18c8d0712e83d9ebeb64ac9f,PodSandboxId:334b75c1a00d4d6d920842db9bfa3da8a0b38efaad2b6c7871d2adb33a453a5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749788693859250,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc38e19f-713f-4e81-b7e0-b806ad8f0f19,},Annotations:map[string]string{io.kubernetes.container.hash: 3fba03a8,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3,PodSandboxId:52e5447296e744deb69f7b651a7752a2bac43e52606770be924768efeffca3f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749784930380408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-92m44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048c7bfa-ea87-4f91-b002-c30fe11cac2a,},Annotations:map[string]string{io.kubernetes.container.hash: fd42b953,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749777774705984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f,PodSandboxId:4e4ad6f7d8f5543a88c821b53abd8f693e58ab7be107fbd9a05140e9ff88a1ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749777543523868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzxt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
9e4ed5e-f9af-4a21-b744-73f9a3c4deda,},Annotations:map[string]string{io.kubernetes.container.hash: fd01ac29,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a,PodSandboxId:5900c522809bd1557bbd65e0f07f7997c83dbc1c42b37dcce77dcf7f91a075fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749770746616447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2df432b1e578fc196f0bf6361862fb38,},An
notations:map[string]string{io.kubernetes.container.hash: a90bff5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6,PodSandboxId:119fb3452debd70dadff0b2505a4e428e780ec2289632c4278a0650e57c883ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749770689296112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1089ee33750e83e402e7b8e5b66c06e,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd,PodSandboxId:3cfd8f8af2bd6ff31eef083e1f653bf45d3e6e4d9e0c2ac734400b2559587673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749770523852037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32a428b4314fb1783d8979f840a7a9d,},An
notations:map[string]string{io.kubernetes.container.hash: 4edaf228,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2,PodSandboxId:d19e6048643cf4c95c0bd02b29baa8b3e83685bcb68190eed46c1ef5f83a58fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749770249460449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
a6104ed3f583bbf618bcc94d8f8b7b7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b81b9381-c1fa-45be-a0d7-9570f1e9c7dd name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.576257893Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7092f059-e642-4b5d-a6bb-f5da02869981 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.576330692Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7092f059-e642-4b5d-a6bb-f5da02869981 name=/runtime.v1.RuntimeService/Version
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.579634173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=30b78853-59b5-44ee-bb30-b5792a44fba7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.580188528Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750772580170838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=30b78853-59b5-44ee-bb30-b5792a44fba7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.582106565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5742eda2-4642-4588-9802-3c32744502c6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.582306642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5742eda2-4642-4588-9802-3c32744502c6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:52 default-k8s-diff-port-690577 crio[726]: time="2024-01-08 21:52:52.582545509Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749808462229589,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:868a6bba2dab0d1f65e9624e65ff3ef214aa868c18c8d0712e83d9ebeb64ac9f,PodSandboxId:334b75c1a00d4d6d920842db9bfa3da8a0b38efaad2b6c7871d2adb33a453a5f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749788693859250,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bc38e19f-713f-4e81-b7e0-b806ad8f0f19,},Annotations:map[string]string{io.kubernetes.container.hash: 3fba03a8,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3,PodSandboxId:52e5447296e744deb69f7b651a7752a2bac43e52606770be924768efeffca3f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749784930380408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-92m44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 048c7bfa-ea87-4f91-b002-c30fe11cac2a,},Annotations:map[string]string{io.kubernetes.container.hash: fd42b953,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4,PodSandboxId:322cee6dffc36dcc11592e3fd349cc747fc306afa9db7a4b9720077e397e1e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749777774705984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 69c923fb-6414-4802-9420-c02694250e2d,},Annotations:map[string]string{io.kubernetes.container.hash: daeced5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f,PodSandboxId:4e4ad6f7d8f5543a88c821b53abd8f693e58ab7be107fbd9a05140e9ff88a1ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749777543523868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qzxt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
9e4ed5e-f9af-4a21-b744-73f9a3c4deda,},Annotations:map[string]string{io.kubernetes.container.hash: fd01ac29,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a,PodSandboxId:5900c522809bd1557bbd65e0f07f7997c83dbc1c42b37dcce77dcf7f91a075fb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749770746616447,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2df432b1e578fc196f0bf6361862fb38,},An
notations:map[string]string{io.kubernetes.container.hash: a90bff5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6,PodSandboxId:119fb3452debd70dadff0b2505a4e428e780ec2289632c4278a0650e57c883ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749770689296112,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1089ee33750e83e402e7b8e5b66c06e,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd,PodSandboxId:3cfd8f8af2bd6ff31eef083e1f653bf45d3e6e4d9e0c2ac734400b2559587673,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749770523852037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32a428b4314fb1783d8979f840a7a9d,},An
notations:map[string]string{io.kubernetes.container.hash: 4edaf228,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2,PodSandboxId:d19e6048643cf4c95c0bd02b29baa8b3e83685bcb68190eed46c1ef5f83a58fa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749770249460449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-690577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
a6104ed3f583bbf618bcc94d8f8b7b7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5742eda2-4642-4588-9802-3c32744502c6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5de4d77203b91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Running             storage-provisioner       2                   322cee6dffc36       storage-provisioner
	868a6bba2dab0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   16 minutes ago      Running             busybox                   1                   334b75c1a00d4       busybox
	d5beab6237d24       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago      Running             coredns                   1                   52e5447296e74       coredns-5dd5756b68-92m44
	a830809c460f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Exited              storage-provisioner       1                   322cee6dffc36       storage-provisioner
	6818cfdc588e8       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      16 minutes ago      Running             kube-proxy                1                   4e4ad6f7d8f55       kube-proxy-qzxt5
	079c7966c6797       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      16 minutes ago      Running             etcd                      1                   5900c522809bd       etcd-default-k8s-diff-port-690577
	419453feb7e07       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      16 minutes ago      Running             kube-scheduler            1                   119fb3452debd       kube-scheduler-default-k8s-diff-port-690577
	c112d2a3f8984       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      16 minutes ago      Running             kube-apiserver            1                   3cfd8f8af2bd6       kube-apiserver-default-k8s-diff-port-690577
	14f88651cc075       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      16 minutes ago      Running             kube-controller-manager   1                   d19e6048643cf       kube-controller-manager-default-k8s-diff-port-690577
	
	
	==> coredns [d5beab6237d240f93214add1aeeade7a2f92bd13264fd5ba92ee48d50d0448c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55859 - 8987 "HINFO IN 485812101045147905.9036944099526942375. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014059557s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-690577
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-690577
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=default-k8s-diff-port-690577
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_28_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:28:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-690577
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:52:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:52:03 +0000   Mon, 08 Jan 2024 21:28:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:52:03 +0000   Mon, 08 Jan 2024 21:28:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:52:03 +0000   Mon, 08 Jan 2024 21:28:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:52:03 +0000   Mon, 08 Jan 2024 21:36:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.165
	  Hostname:    default-k8s-diff-port-690577
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ba9bd2360df43d8a78dec72642dfc6f
	  System UUID:                8ba9bd23-60df-43d8-a78d-ec72642dfc6f
	  Boot ID:                    c71f28c0-c58a-4372-b1c1-6bf723d33afd
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-5dd5756b68-92m44                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 etcd-default-k8s-diff-port-690577                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kube-apiserver-default-k8s-diff-port-690577             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-690577    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-qzxt5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-default-k8s-diff-port-690577             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 metrics-server-57f55c9bc5-46dvw                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         23m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientPID     24m                kubelet          Node default-k8s-diff-port-690577 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24m                kubelet          Node default-k8s-diff-port-690577 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m                kubelet          Node default-k8s-diff-port-690577 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                24m (x2 over 24m)  kubelet          Node default-k8s-diff-port-690577 status is now: NodeReady
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           24m                node-controller  Node default-k8s-diff-port-690577 event: Registered Node default-k8s-diff-port-690577 in Controller
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-690577 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-690577 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-690577 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-690577 event: Registered Node default-k8s-diff-port-690577 in Controller
	
	
	==> dmesg <==
	[Jan 8 21:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068063] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.463971] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.558644] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.157257] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.606503] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.283716] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.124837] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.150730] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.116626] systemd-fstab-generator[686]: Ignoring "noauto" for root device
	[  +0.239864] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[Jan 8 21:36] systemd-fstab-generator[929]: Ignoring "noauto" for root device
	[ +15.429229] kauditd_printk_skb: 21 callbacks suppressed
	[Jan 8 21:52] hrtimer: interrupt took 3446022 ns
	
	
	==> etcd [079c7966c6797c63f7cefd5dee91ff385dcb810e98c30a1a80893906abee178a] <==
	{"level":"info","ts":"2024-01-08T21:51:07.646307Z","caller":"traceutil/trace.go:171","msg":"trace[1158890072] linearizableReadLoop","detail":"{readStateIndex:1524; appliedIndex:1523; }","duration":"214.530729ms","start":"2024-01-08T21:51:07.431723Z","end":"2024-01-08T21:51:07.646254Z","steps":["trace[1158890072] 'read index received'  (duration: 214.274929ms)","trace[1158890072] 'applied index is now lower than readState.Index'  (duration: 255.315µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T21:51:07.646687Z","caller":"traceutil/trace.go:171","msg":"trace[1612185828] transaction","detail":"{read_only:false; response_revision:1304; number_of_response:1; }","duration":"244.396859ms","start":"2024-01-08T21:51:07.402235Z","end":"2024-01-08T21:51:07.646632Z","steps":["trace[1612185828] 'process raft request'  (duration: 243.803543ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:51:07.646938Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.865081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-01-08T21:51:07.6476Z","caller":"traceutil/trace.go:171","msg":"trace[211805775] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1304; }","duration":"215.899424ms","start":"2024-01-08T21:51:07.431689Z","end":"2024-01-08T21:51:07.647588Z","steps":["trace[211805775] 'agreement among raft nodes before linearized reading'  (duration: 214.847894ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:51:13.901834Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1066}
	{"level":"info","ts":"2024-01-08T21:51:13.90334Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1066,"took":"1.234705ms","hash":1883999114}
	{"level":"info","ts":"2024-01-08T21:51:13.903415Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1883999114,"revision":1066,"compact-revision":824}
	{"level":"info","ts":"2024-01-08T21:51:56.151111Z","caller":"traceutil/trace.go:171","msg":"trace[885017364] linearizableReadLoop","detail":"{readStateIndex:1573; appliedIndex:1572; }","duration":"479.556136ms","start":"2024-01-08T21:51:55.671519Z","end":"2024-01-08T21:51:56.151075Z","steps":["trace[885017364] 'read index received'  (duration: 479.157797ms)","trace[885017364] 'applied index is now lower than readState.Index'  (duration: 391.035µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T21:51:56.151415Z","caller":"traceutil/trace.go:171","msg":"trace[67863084] transaction","detail":"{read_only:false; response_revision:1342; number_of_response:1; }","duration":"591.089651ms","start":"2024-01-08T21:51:55.56031Z","end":"2024-01-08T21:51:56.1514Z","steps":["trace[67863084] 'process raft request'  (duration: 590.403556ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:51:56.151516Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.524776ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-01-08T21:51:56.151612Z","caller":"traceutil/trace.go:171","msg":"trace[586748115] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1342; }","duration":"174.639833ms","start":"2024-01-08T21:51:55.976954Z","end":"2024-01-08T21:51:56.151594Z","steps":["trace[586748115] 'agreement among raft nodes before linearized reading'  (duration: 174.484602ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:51:56.151717Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T21:51:55.560274Z","time spent":"591.20225ms","remote":"127.0.0.1:57966","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-690577\" mod_revision:1334 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-690577\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-690577\" > >"}
	{"level":"warn","ts":"2024-01-08T21:51:56.152165Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"480.668262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:51:56.152354Z","caller":"traceutil/trace.go:171","msg":"trace[891433532] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1342; }","duration":"480.857078ms","start":"2024-01-08T21:51:55.671487Z","end":"2024-01-08T21:51:56.152344Z","steps":["trace[891433532] 'agreement among raft nodes before linearized reading'  (duration: 480.658009ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:51:56.152427Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T21:51:55.671465Z","time spent":"480.951101ms","remote":"127.0.0.1:57900","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-01-08T21:51:56.721323Z","caller":"traceutil/trace.go:171","msg":"trace[1529392897] linearizableReadLoop","detail":"{readStateIndex:1574; appliedIndex:1573; }","duration":"523.077313ms","start":"2024-01-08T21:51:56.198231Z","end":"2024-01-08T21:51:56.721308Z","steps":["trace[1529392897] 'read index received'  (duration: 496.286294ms)","trace[1529392897] 'applied index is now lower than readState.Index'  (duration: 26.790145ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T21:51:56.721572Z","caller":"traceutil/trace.go:171","msg":"trace[1304535135] transaction","detail":"{read_only:false; response_revision:1343; number_of_response:1; }","duration":"562.435646ms","start":"2024-01-08T21:51:56.159126Z","end":"2024-01-08T21:51:56.721562Z","steps":["trace[1304535135] 'process raft request'  (duration: 535.550652ms)","trace[1304535135] 'compare'  (duration: 26.362539ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T21:51:56.721687Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T21:51:56.159111Z","time spent":"562.523475ms","remote":"127.0.0.1:57942","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1341 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-01-08T21:51:56.721963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"523.749827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:51:56.722021Z","caller":"traceutil/trace.go:171","msg":"trace[1046089194] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; response_count:0; response_revision:1344; }","duration":"523.806674ms","start":"2024-01-08T21:51:56.1982Z","end":"2024-01-08T21:51:56.722007Z","steps":["trace[1046089194] 'agreement among raft nodes before linearized reading'  (duration: 523.676954ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:51:56.722044Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T21:51:56.198183Z","time spent":"523.854003ms","remote":"127.0.0.1:57936","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":28,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" count_only:true "}
	{"level":"info","ts":"2024-01-08T21:51:56.721578Z","caller":"traceutil/trace.go:171","msg":"trace[938469787] transaction","detail":"{read_only:false; response_revision:1344; number_of_response:1; }","duration":"298.497184ms","start":"2024-01-08T21:51:56.423058Z","end":"2024-01-08T21:51:56.721555Z","steps":["trace[938469787] 'process raft request'  (duration: 298.418324ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:51:57.229353Z","caller":"traceutil/trace.go:171","msg":"trace[110424601] transaction","detail":"{read_only:false; response_revision:1345; number_of_response:1; }","duration":"145.330788ms","start":"2024-01-08T21:51:57.084Z","end":"2024-01-08T21:51:57.229331Z","steps":["trace[110424601] 'process raft request'  (duration: 94.221806ms)","trace[110424601] 'compare'  (duration: 50.916242ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T21:52:23.089892Z","caller":"traceutil/trace.go:171","msg":"trace[1416688389] transaction","detail":"{read_only:false; response_revision:1365; number_of_response:1; }","duration":"116.818572ms","start":"2024-01-08T21:52:22.973058Z","end":"2024-01-08T21:52:23.089876Z","steps":["trace[1416688389] 'process raft request'  (duration: 116.403503ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:52:35.490618Z","caller":"traceutil/trace.go:171","msg":"trace[1861831284] transaction","detail":"{read_only:false; response_revision:1375; number_of_response:1; }","duration":"205.411376ms","start":"2024-01-08T21:52:35.285176Z","end":"2024-01-08T21:52:35.490587Z","steps":["trace[1861831284] 'process raft request'  (duration: 205.221925ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:52:52 up 17 min,  0 users,  load average: 0.14, 0.13, 0.10
	Linux default-k8s-diff-port-690577 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [c112d2a3f898488c1a61d845db303c39d1167e4474123a94c6e09ba5fab948bd] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:51:15.590509       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 21:51:16.591289       1 handler_proxy.go:93] no RequestInfo found in the context
	W0108 21:51:16.591391       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:51:16.591449       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:51:16.591504       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0108 21:51:16.591574       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:51:16.592848       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:51:56.152515       1 trace.go:236] Trace[1246152899]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c66f1ab2-46d9-4401-ad45-0f2f606d8602,client:192.168.50.165,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/default-k8s-diff-port-690577,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (08-Jan-2024 21:51:55.558) (total time: 594ms):
	Trace[1246152899]: ["GuaranteedUpdate etcd3" audit-id:c66f1ab2-46d9-4401-ad45-0f2f606d8602,key:/leases/kube-node-lease/default-k8s-diff-port-690577,type:*coordination.Lease,resource:leases.coordination.k8s.io 593ms (21:51:55.558)
	Trace[1246152899]:  ---"Txn call completed" 592ms (21:51:56.152)]
	Trace[1246152899]: [594.050893ms] [594.050893ms] END
	I0108 21:51:56.723223       1 trace.go:236] Trace[491382743]: "Update" accept:application/json, */*,audit-id:8baeb5ea-b2f7-474f-bc72-1db1328283e9,client:192.168.50.165,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (08-Jan-2024 21:51:56.157) (total time: 565ms):
	Trace[491382743]: ["GuaranteedUpdate etcd3" audit-id:8baeb5ea-b2f7-474f-bc72-1db1328283e9,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 565ms (21:51:56.157)
	Trace[491382743]:  ---"Txn call completed" 564ms (21:51:56.723)]
	Trace[491382743]: [565.790429ms] [565.790429ms] END
	I0108 21:52:15.444543       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 21:52:16.592206       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:52:16.592280       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:52:16.592297       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:52:16.593638       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:52:16.593891       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:52:16.593969       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [14f88651cc0758f56bcd2ced50580427cd6c75f47b0804456c8de7c4d31b4be2] <==
	E0108 21:47:28.737571       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:47:29.159001       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 21:47:35.248054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="194.345µs"
	E0108 21:47:58.743122       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:47:59.168106       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:48:28.749239       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:48:29.176720       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:48:58.754601       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:48:59.186219       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:49:28.760699       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:49:29.195833       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:49:58.766194       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:49:59.203856       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:50:28.773453       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:50:29.213548       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:50:58.780628       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:50:59.223885       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:51:28.789633       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:51:29.238358       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:51:58.804915       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:51:59.251120       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 21:52:27.267186       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="638.961µs"
	E0108 21:52:28.813512       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:52:29.269419       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 21:52:38.250571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="105.917µs"
	
	
	==> kube-proxy [6818cfdc588e890433727965dd65ad05b5f7a73520757ab03578ff3ce09e8c8f] <==
	I0108 21:36:17.958242       1 server_others.go:69] "Using iptables proxy"
	I0108 21:36:17.974335       1 node.go:141] Successfully retrieved node IP: 192.168.50.165
	I0108 21:36:18.025055       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:36:18.025137       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:36:18.032463       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:36:18.032562       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:36:18.032929       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:36:18.033006       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:36:18.034054       1 config.go:188] "Starting service config controller"
	I0108 21:36:18.034105       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:36:18.034139       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:36:18.034154       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:36:18.034667       1 config.go:315] "Starting node config controller"
	I0108 21:36:18.034824       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:36:18.134608       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 21:36:18.134935       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:36:18.135119       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [419453feb7e0799e4024b13dc876bf4b63ba01803427ce79522c7d6881e54ff6] <==
	I0108 21:36:13.622221       1 serving.go:348] Generated self-signed cert in-memory
	I0108 21:36:15.661114       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0108 21:36:15.661218       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:36:15.683108       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0108 21:36:15.683641       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0108 21:36:15.683706       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0108 21:36:15.683889       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 21:36:15.684567       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 21:36:15.684612       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:36:15.684647       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0108 21:36:15.684671       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0108 21:36:15.784368       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0108 21:36:15.784712       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0108 21:36:15.784857       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:35:41 UTC, ends at Mon 2024-01-08 21:52:53 UTC. --
	Jan 08 21:50:20 default-k8s-diff-port-690577 kubelet[935]: E0108 21:50:20.230150     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:50:34 default-k8s-diff-port-690577 kubelet[935]: E0108 21:50:34.229248     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:50:46 default-k8s-diff-port-690577 kubelet[935]: E0108 21:50:46.229436     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:50:57 default-k8s-diff-port-690577 kubelet[935]: E0108 21:50:57.232122     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:51:09 default-k8s-diff-port-690577 kubelet[935]: E0108 21:51:09.231390     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:51:09 default-k8s-diff-port-690577 kubelet[935]: E0108 21:51:09.232197     935 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 08 21:51:09 default-k8s-diff-port-690577 kubelet[935]: E0108 21:51:09.357274     935 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:51:09 default-k8s-diff-port-690577 kubelet[935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:51:09 default-k8s-diff-port-690577 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:51:09 default-k8s-diff-port-690577 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:51:22 default-k8s-diff-port-690577 kubelet[935]: E0108 21:51:22.229277     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:51:37 default-k8s-diff-port-690577 kubelet[935]: E0108 21:51:37.230640     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:51:50 default-k8s-diff-port-690577 kubelet[935]: E0108 21:51:50.230511     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:52:04 default-k8s-diff-port-690577 kubelet[935]: E0108 21:52:04.230523     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:52:09 default-k8s-diff-port-690577 kubelet[935]: E0108 21:52:09.360214     935 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:52:09 default-k8s-diff-port-690577 kubelet[935]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:52:09 default-k8s-diff-port-690577 kubelet[935]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:52:09 default-k8s-diff-port-690577 kubelet[935]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:52:15 default-k8s-diff-port-690577 kubelet[935]: E0108 21:52:15.247826     935 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 08 21:52:15 default-k8s-diff-port-690577 kubelet[935]: E0108 21:52:15.247933     935 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 08 21:52:15 default-k8s-diff-port-690577 kubelet[935]: E0108 21:52:15.248158     935 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-kqmk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-46dvw_kube-system(6c095070-fdfd-4d65-b0b4-b4c234fad85d): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 08 21:52:15 default-k8s-diff-port-690577 kubelet[935]: E0108 21:52:15.248196     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:52:27 default-k8s-diff-port-690577 kubelet[935]: E0108 21:52:27.234479     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:52:38 default-k8s-diff-port-690577 kubelet[935]: E0108 21:52:38.230458     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	Jan 08 21:52:50 default-k8s-diff-port-690577 kubelet[935]: E0108 21:52:50.230094     935 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-46dvw" podUID="6c095070-fdfd-4d65-b0b4-b4c234fad85d"
	
	
	==> storage-provisioner [5de4d77203b91627ace7d8bd266f1a77fe0a54de98d5ad0eff602ceb462d3348] <==
	I0108 21:36:48.608985       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 21:36:48.624087       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 21:36:48.624243       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 21:37:06.027608       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 21:37:06.028064       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-690577_df43ce7a-bee6-4dd1-bdde-80a7cb13df6d!
	I0108 21:37:06.030093       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"221648e3-88aa-4645-a609-fbdc8360324e", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-690577_df43ce7a-bee6-4dd1-bdde-80a7cb13df6d became leader
	I0108 21:37:06.128674       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-690577_df43ce7a-bee6-4dd1-bdde-80a7cb13df6d!
	
	
	==> storage-provisioner [a830809c460f40c782fdcd01c642a4e69e9496eca8029363ce62db5ff6d28ec4] <==
	I0108 21:36:17.957155       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0108 21:36:47.960972       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-690577 -n default-k8s-diff-port-690577
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-690577 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-46dvw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-690577 describe pod metrics-server-57f55c9bc5-46dvw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-690577 describe pod metrics-server-57f55c9bc5-46dvw: exit status 1 (75.880249ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-46dvw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-690577 describe pod metrics-server-57f55c9bc5-46dvw: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (188.71s)
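Note (editorial annotation, not part of the captured log): the repeated ErrImagePull / ImagePullBackOff entries for metrics-server in the kubelet log above are expected for this test profile. The addon is deliberately enabled with --registries=MetricsServer=fake.domain (see the Audit table further down), so the node can never resolve the image registry. A minimal, standalone Go sketch, assuming a host without wildcard DNS, that reproduces the underlying name-resolution failure:

package main

import (
	"fmt"
	"net"
)

func main() {
	// fake.domain is intentionally unresolvable, which is why the kubelet above
	// reports: dial tcp: lookup fake.domain: no such host.
	if _, err := net.LookupHost("fake.domain"); err != nil {
		fmt.Println("registry lookup failed as intended:", err)
	}
}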

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (135.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 21:50:29.700004   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:50:36.430121   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 21:51:04.516845   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-930023 -n embed-certs-930023
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-08 21:52:25.055358458 +0000 UTC m=+6168.980249818
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-930023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-930023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.921µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-930023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
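Note (editorial annotation, not part of the captured log): the failure at start_stop_delete_test.go:287 above is a nine-minute poll for any pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, which exceeded its context deadline. The sketch below is a rough illustration of that kind of label wait, not the suite's actual helper (which also waits for the pods to start); the kubectl context, namespace, and selector are taken from the log above, everything else is illustrative.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPods polls kubectl until at least one pod matching selector exists in
// namespace ns, or until ctx expires. It only checks existence; the real test
// helper additionally waits for the pods to become healthy.
func waitForPods(ctx context.Context, kubeContext, ns, selector string) error {
	tick := time.NewTicker(5 * time.Second)
	defer tick.Stop()
	for {
		out, err := exec.CommandContext(ctx, "kubectl",
			"--context", kubeContext, "-n", ns, "get", "pods",
			"-l", selector, "-o", "jsonpath={.items[*].metadata.name}").Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("no pod matching %q appeared: %w", selector, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	if err := waitForPods(ctx, "embed-certs-930023", "kubernetes-dashboard",
		"k8s-app=kubernetes-dashboard"); err != nil {
		fmt.Println(err)
	}
}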
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-930023 -n embed-certs-930023
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-930023 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-930023 logs -n 25: (2.148414432s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-930023            | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC | 08 Jan 24 21:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-690577  | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC | 08 Jan 24 21:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:29 UTC |                     |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-930023                 | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-930023                                  | embed-certs-930023           | jenkins | v1.32.0 | 08 Jan 24 21:30 UTC | 08 Jan 24 21:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-690577       | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-690577 | jenkins | v1.32.0 | 08 Jan 24 21:31 UTC | 08 Jan 24 21:40 UTC |
	|         | default-k8s-diff-port-690577                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-879273                              | old-k8s-version-879273       | jenkins | v1.32.0 | 08 Jan 24 21:41 UTC | 08 Jan 24 21:41 UTC |
	| start   | -p newest-cni-233407 --memory=2200 --alsologtostderr   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:41 UTC | 08 Jan 24 21:42 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-233407             | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:42 UTC | 08 Jan 24 21:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-233407                                   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-233407                  | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-233407 --memory=2200 --alsologtostderr   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:45 UTC | 08 Jan 24 21:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-420119                                   | no-preload-420119            | jenkins | v1.32.0 | 08 Jan 24 21:46 UTC | 08 Jan 24 21:46 UTC |
	| start   | -p kubernetes-upgrade-862639                           | kubernetes-upgrade-862639    | jenkins | v1.32.0 | 08 Jan 24 21:46 UTC | 08 Jan 24 21:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-862639                           | kubernetes-upgrade-862639    | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:51 UTC |
	| start   | -p kubernetes-upgrade-862639                           | kubernetes-upgrade-862639    | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | newest-cni-233407 image list                           | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:51 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-233407                                   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-233407                                   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:51 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-233407                                   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:51 UTC |
	| delete  | -p newest-cni-233407                                   | newest-cni-233407            | jenkins | v1.32.0 | 08 Jan 24 21:51 UTC | 08 Jan 24 21:51 UTC |
	| start   | -p kubernetes-upgrade-862639                           | kubernetes-upgrade-862639    | jenkins | v1.32.0 | 08 Jan 24 21:52 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-862639                           | kubernetes-upgrade-862639    | jenkins | v1.32.0 | 08 Jan 24 21:52 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:52:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:52:07.692979   58333 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:52:07.693134   58333 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:52:07.693146   58333 out.go:309] Setting ErrFile to fd 2...
	I0108 21:52:07.693153   58333 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:52:07.693451   58333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:52:07.694187   58333 out.go:303] Setting JSON to false
	I0108 21:52:07.695439   58333 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9252,"bootTime":1704741476,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:52:07.695525   58333 start.go:138] virtualization: kvm guest
	I0108 21:52:07.697898   58333 out.go:177] * [kubernetes-upgrade-862639] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:52:07.699471   58333 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:52:07.699558   58333 notify.go:220] Checking for updates...
	I0108 21:52:07.701323   58333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:52:07.703059   58333 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:52:07.704722   58333 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:52:07.706192   58333 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:52:07.707644   58333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:52:07.709600   58333 config.go:182] Loaded profile config "kubernetes-upgrade-862639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 21:52:07.710085   58333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:52:07.710134   58333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:52:07.728789   58333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0108 21:52:07.729248   58333 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:52:07.729908   58333 main.go:141] libmachine: Using API Version  1
	I0108 21:52:07.729943   58333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:52:07.730336   58333 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:52:07.730509   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .DriverName
	I0108 21:52:07.730755   58333 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:52:07.731174   58333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:52:07.731224   58333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:52:07.747100   58333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
	I0108 21:52:07.747580   58333 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:52:07.748149   58333 main.go:141] libmachine: Using API Version  1
	I0108 21:52:07.748177   58333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:52:07.748532   58333 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:52:07.748816   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .DriverName
	I0108 21:52:07.791084   58333 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 21:52:07.792571   58333 start.go:298] selected driver: kvm2
	I0108 21:52:07.792588   58333 start.go:902] validating driver "kvm2" against &{Name:kubernetes-upgrade-862639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-862639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:52:07.792699   58333 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:52:07.793360   58333 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:52:07.793442   58333 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:52:07.809703   58333 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:52:07.810125   58333 cni.go:84] Creating CNI manager for ""
	I0108 21:52:07.810144   58333 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:52:07.810156   58333 start_flags.go:323] config:
	{Name:kubernetes-upgrade-862639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-862639
Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:52:07.810301   58333 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:52:07.812445   58333 out.go:177] * Starting control plane node kubernetes-upgrade-862639 in cluster kubernetes-upgrade-862639
	I0108 21:52:07.814051   58333 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 21:52:07.814096   58333 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 21:52:07.814103   58333 cache.go:56] Caching tarball of preloaded images
	I0108 21:52:07.814207   58333 preload.go:174] Found /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0108 21:52:07.814222   58333 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0108 21:52:07.814389   58333 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kubernetes-upgrade-862639/config.json ...
	I0108 21:52:07.814595   58333 start.go:365] acquiring machines lock for kubernetes-upgrade-862639: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:52:07.814643   58333 start.go:369] acquired machines lock for "kubernetes-upgrade-862639" in 27.643µs
	I0108 21:52:07.814665   58333 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:52:07.814673   58333 fix.go:54] fixHost starting: 
	I0108 21:52:07.814966   58333 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:52:07.815017   58333 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:52:07.830365   58333 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I0108 21:52:07.830834   58333 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:52:07.831405   58333 main.go:141] libmachine: Using API Version  1
	I0108 21:52:07.831443   58333 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:52:07.831790   58333 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:52:07.832009   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .DriverName
	I0108 21:52:07.832193   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetState
	I0108 21:52:07.834242   58333 fix.go:102] recreateIfNeeded on kubernetes-upgrade-862639: state=Running err=<nil>
	W0108 21:52:07.834262   58333 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:52:07.836615   58333 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-862639" VM ...
	I0108 21:52:07.838282   58333 machine.go:88] provisioning docker machine ...
	I0108 21:52:07.838312   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .DriverName
	I0108 21:52:07.838598   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetMachineName
	I0108 21:52:07.838910   58333 buildroot.go:166] provisioning hostname "kubernetes-upgrade-862639"
	I0108 21:52:07.838934   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetMachineName
	I0108 21:52:07.839105   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHHostname
	I0108 21:52:07.842550   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:07.843075   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:ba:90", ip: ""} in network mk-kubernetes-upgrade-862639: {Iface:virbr1 ExpiryTime:2024-01-08 22:51:38 +0000 UTC Type:0 Mac:52:54:00:77:ba:90 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:kubernetes-upgrade-862639 Clientid:01:52:54:00:77:ba:90}
	I0108 21:52:07.843113   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined IP address 192.168.72.210 and MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:07.843277   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHPort
	I0108 21:52:07.843448   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHKeyPath
	I0108 21:52:07.843608   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHKeyPath
	I0108 21:52:07.843766   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHUsername
	I0108 21:52:07.843918   58333 main.go:141] libmachine: Using SSH client type: native
	I0108 21:52:07.844488   58333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0108 21:52:07.844518   58333 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-862639 && echo "kubernetes-upgrade-862639" | sudo tee /etc/hostname
	I0108 21:52:08.071996   58333 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-862639
	
	I0108 21:52:08.072026   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHHostname
	I0108 21:52:08.075126   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:08.075492   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:ba:90", ip: ""} in network mk-kubernetes-upgrade-862639: {Iface:virbr1 ExpiryTime:2024-01-08 22:51:38 +0000 UTC Type:0 Mac:52:54:00:77:ba:90 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:kubernetes-upgrade-862639 Clientid:01:52:54:00:77:ba:90}
	I0108 21:52:08.075527   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined IP address 192.168.72.210 and MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:08.075799   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHPort
	I0108 21:52:08.075999   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHKeyPath
	I0108 21:52:08.076194   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHKeyPath
	I0108 21:52:08.076398   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHUsername
	I0108 21:52:08.076568   58333 main.go:141] libmachine: Using SSH client type: native
	I0108 21:52:08.077156   58333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0108 21:52:08.077186   58333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-862639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-862639/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-862639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:52:08.231200   58333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:52:08.231227   58333 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17907-10702/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-10702/.minikube}
	I0108 21:52:08.231258   58333 buildroot.go:174] setting up certificates
	I0108 21:52:08.231283   58333 provision.go:83] configureAuth start
	I0108 21:52:08.231295   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetMachineName
	I0108 21:52:08.231587   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetIP
	I0108 21:52:08.235030   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:08.235563   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:ba:90", ip: ""} in network mk-kubernetes-upgrade-862639: {Iface:virbr1 ExpiryTime:2024-01-08 22:51:38 +0000 UTC Type:0 Mac:52:54:00:77:ba:90 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:kubernetes-upgrade-862639 Clientid:01:52:54:00:77:ba:90}
	I0108 21:52:08.235601   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined IP address 192.168.72.210 and MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:08.235959   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHHostname
	I0108 21:52:08.239011   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:08.239517   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:ba:90", ip: ""} in network mk-kubernetes-upgrade-862639: {Iface:virbr1 ExpiryTime:2024-01-08 22:51:38 +0000 UTC Type:0 Mac:52:54:00:77:ba:90 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:kubernetes-upgrade-862639 Clientid:01:52:54:00:77:ba:90}
	I0108 21:52:08.239544   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined IP address 192.168.72.210 and MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:08.239686   58333 provision.go:138] copyHostCerts
	I0108 21:52:08.239742   58333 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem, removing ...
	I0108 21:52:08.239752   58333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 21:52:08.239808   58333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem (1082 bytes)
	I0108 21:52:08.239900   58333 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem, removing ...
	I0108 21:52:08.239908   58333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 21:52:08.239931   58333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem (1123 bytes)
	I0108 21:52:08.239997   58333 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem, removing ...
	I0108 21:52:08.240007   58333 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 21:52:08.240024   58333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem (1675 bytes)
	I0108 21:52:08.240077   58333 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-862639 san=[192.168.72.210 192.168.72.210 localhost 127.0.0.1 minikube kubernetes-upgrade-862639]
	I0108 21:52:08.376077   58333 provision.go:172] copyRemoteCerts
	I0108 21:52:08.376159   58333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:52:08.376182   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHHostname
	I0108 21:52:08.379147   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:08.379644   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:ba:90", ip: ""} in network mk-kubernetes-upgrade-862639: {Iface:virbr1 ExpiryTime:2024-01-08 22:51:38 +0000 UTC Type:0 Mac:52:54:00:77:ba:90 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:kubernetes-upgrade-862639 Clientid:01:52:54:00:77:ba:90}
	I0108 21:52:08.379694   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined IP address 192.168.72.210 and MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:08.380034   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHPort
	I0108 21:52:08.380353   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHKeyPath
	I0108 21:52:08.380499   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHUsername
	I0108 21:52:08.380748   58333 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/kubernetes-upgrade-862639/id_rsa Username:docker}
	I0108 21:52:08.486571   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:52:08.567023   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 21:52:08.642508   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0108 21:52:08.668127   58333 provision.go:86] duration metric: configureAuth took 436.829488ms
	I0108 21:52:08.668157   58333 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:52:08.668395   58333 config.go:182] Loaded profile config "kubernetes-upgrade-862639": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0108 21:52:08.668481   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHHostname
	I0108 21:52:08.671516   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:08.671933   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:ba:90", ip: ""} in network mk-kubernetes-upgrade-862639: {Iface:virbr1 ExpiryTime:2024-01-08 22:51:38 +0000 UTC Type:0 Mac:52:54:00:77:ba:90 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:kubernetes-upgrade-862639 Clientid:01:52:54:00:77:ba:90}
	I0108 21:52:08.671970   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined IP address 192.168.72.210 and MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:08.672154   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHPort
	I0108 21:52:08.672397   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHKeyPath
	I0108 21:52:08.672598   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHKeyPath
	I0108 21:52:08.672739   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHUsername
	I0108 21:52:08.672939   58333 main.go:141] libmachine: Using SSH client type: native
	I0108 21:52:08.673248   58333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0108 21:52:08.673269   58333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:52:10.162567   58333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:52:10.162595   58333 machine.go:91] provisioned docker machine in 2.32429464s
	I0108 21:52:10.162610   58333 start.go:300] post-start starting for "kubernetes-upgrade-862639" (driver="kvm2")
	I0108 21:52:10.162623   58333 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:52:10.162640   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .DriverName
	I0108 21:52:10.163020   58333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:52:10.163062   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHHostname
	I0108 21:52:10.166292   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:10.166667   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:ba:90", ip: ""} in network mk-kubernetes-upgrade-862639: {Iface:virbr1 ExpiryTime:2024-01-08 22:51:38 +0000 UTC Type:0 Mac:52:54:00:77:ba:90 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:kubernetes-upgrade-862639 Clientid:01:52:54:00:77:ba:90}
	I0108 21:52:10.166698   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined IP address 192.168.72.210 and MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:10.166838   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHPort
	I0108 21:52:10.167016   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHKeyPath
	I0108 21:52:10.167185   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHUsername
	I0108 21:52:10.167300   58333 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/kubernetes-upgrade-862639/id_rsa Username:docker}
	I0108 21:52:10.267460   58333 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:52:10.272064   58333 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:52:10.272116   58333 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/addons for local assets ...
	I0108 21:52:10.272196   58333 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/files for local assets ...
	I0108 21:52:10.272276   58333 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> 178962.pem in /etc/ssl/certs
	I0108 21:52:10.272368   58333 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:52:10.282430   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /etc/ssl/certs/178962.pem (1708 bytes)
	I0108 21:52:10.308263   58333 start.go:303] post-start completed in 145.638118ms
	I0108 21:52:10.308289   58333 fix.go:56] fixHost completed within 2.493615919s
	I0108 21:52:10.308340   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHHostname
	I0108 21:52:10.311180   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:10.311606   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:ba:90", ip: ""} in network mk-kubernetes-upgrade-862639: {Iface:virbr1 ExpiryTime:2024-01-08 22:51:38 +0000 UTC Type:0 Mac:52:54:00:77:ba:90 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:kubernetes-upgrade-862639 Clientid:01:52:54:00:77:ba:90}
	I0108 21:52:10.311652   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined IP address 192.168.72.210 and MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:10.311870   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHPort
	I0108 21:52:10.312063   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHKeyPath
	I0108 21:52:10.312264   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHKeyPath
	I0108 21:52:10.312441   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHUsername
	I0108 21:52:10.312613   58333 main.go:141] libmachine: Using SSH client type: native
	I0108 21:52:10.313033   58333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.72.210 22 <nil> <nil>}
	I0108 21:52:10.313047   58333 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 21:52:10.434957   58333 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704750730.379775059
	
	I0108 21:52:10.434982   58333 fix.go:206] guest clock: 1704750730.379775059
	I0108 21:52:10.434993   58333 fix.go:219] Guest: 2024-01-08 21:52:10.379775059 +0000 UTC Remote: 2024-01-08 21:52:10.30829331 +0000 UTC m=+2.682733386 (delta=71.481749ms)
	I0108 21:52:10.435019   58333 fix.go:190] guest clock delta is within tolerance: 71.481749ms
	I0108 21:52:10.435026   58333 start.go:83] releasing machines lock for "kubernetes-upgrade-862639", held for 2.620371018s
	I0108 21:52:10.435052   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .DriverName
	I0108 21:52:10.435314   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetIP
	I0108 21:52:10.438388   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:10.438793   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:ba:90", ip: ""} in network mk-kubernetes-upgrade-862639: {Iface:virbr1 ExpiryTime:2024-01-08 22:51:38 +0000 UTC Type:0 Mac:52:54:00:77:ba:90 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:kubernetes-upgrade-862639 Clientid:01:52:54:00:77:ba:90}
	I0108 21:52:10.438824   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined IP address 192.168.72.210 and MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:10.439035   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .DriverName
	I0108 21:52:10.439705   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .DriverName
	I0108 21:52:10.439905   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .DriverName
	I0108 21:52:10.440005   58333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:52:10.440053   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHHostname
	I0108 21:52:10.440076   58333 ssh_runner.go:195] Run: cat /version.json
	I0108 21:52:10.440119   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHHostname
	I0108 21:52:10.443093   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:10.443286   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:10.443555   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:ba:90", ip: ""} in network mk-kubernetes-upgrade-862639: {Iface:virbr1 ExpiryTime:2024-01-08 22:51:38 +0000 UTC Type:0 Mac:52:54:00:77:ba:90 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:kubernetes-upgrade-862639 Clientid:01:52:54:00:77:ba:90}
	I0108 21:52:10.443584   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined IP address 192.168.72.210 and MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:10.443670   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHPort
	I0108 21:52:10.443711   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:ba:90", ip: ""} in network mk-kubernetes-upgrade-862639: {Iface:virbr1 ExpiryTime:2024-01-08 22:51:38 +0000 UTC Type:0 Mac:52:54:00:77:ba:90 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:kubernetes-upgrade-862639 Clientid:01:52:54:00:77:ba:90}
	I0108 21:52:10.443739   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined IP address 192.168.72.210 and MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:10.443840   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHKeyPath
	I0108 21:52:10.443930   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHPort
	I0108 21:52:10.443991   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHUsername
	I0108 21:52:10.444084   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHKeyPath
	I0108 21:52:10.444178   58333 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/kubernetes-upgrade-862639/id_rsa Username:docker}
	I0108 21:52:10.444242   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetSSHUsername
	I0108 21:52:10.444353   58333 sshutil.go:53] new ssh client: &{IP:192.168.72.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/kubernetes-upgrade-862639/id_rsa Username:docker}
	I0108 21:52:10.529110   58333 ssh_runner.go:195] Run: systemctl --version
	I0108 21:52:10.557429   58333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:52:10.922931   58333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 21:52:10.935429   58333 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:52:10.935513   58333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:52:10.959528   58333 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 21:52:10.959570   58333 start.go:475] detecting cgroup driver to use...
	I0108 21:52:10.959642   58333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:52:10.984777   58333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
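	The is-active probes above rely purely on the exit status: with --quiet, systemctl prints nothing and only returns 0 when the unit is active. A minimal illustrative sketch (not part of the test run):

	    # 0 = active, non-zero = inactive/failed/unknown; --quiet suppresses the state text.
	    if sudo systemctl is-active --quiet containerd; then
	      echo "containerd is still running"
	    fi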
	I0108 21:52:11.005746   58333 docker.go:217] disabling cri-docker service (if available) ...
	I0108 21:52:11.005816   58333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:52:11.053107   58333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:52:11.180249   58333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 21:52:11.521460   58333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:52:11.765612   58333 docker.go:233] disabling docker service ...
	I0108 21:52:11.765726   58333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:52:11.798470   58333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:52:11.820035   58333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:52:12.068423   58333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:52:12.342790   58333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:52:12.378444   58333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:52:12.411945   58333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0108 21:52:12.412012   58333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:52:12.438498   58333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0108 21:52:12.438571   58333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:52:12.461456   58333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:52:12.482148   58333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
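	The sed edits above pin the pause image to registry.k8s.io/pause:3.9 and switch CRI-O to the cgroupfs cgroup manager with conmon placed in the pod cgroup. A minimal sketch of how the resulting values could be double-checked on the guest (illustrative only, not something the test executed):

	    # Values the sed edits above should leave behind in 02-crio.conf.
	    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, per the log above:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"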
	I0108 21:52:12.502057   58333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:52:12.521211   58333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:52:12.535932   58333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:52:12.553567   58333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:52:12.805034   58333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0108 21:52:14.372693   58333 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.567622765s)
	I0108 21:52:14.372728   58333 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0108 21:52:14.372780   58333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0108 21:52:14.377941   58333 start.go:543] Will wait 60s for crictl version
	I0108 21:52:14.378015   58333 ssh_runner.go:195] Run: which crictl
	I0108 21:52:14.382139   58333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:52:14.418952   58333 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0108 21:52:14.419039   58333 ssh_runner.go:195] Run: crio --version
	I0108 21:52:14.482369   58333 ssh_runner.go:195] Run: crio --version
	I0108 21:52:14.536416   58333 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0108 21:52:14.538030   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) Calling .GetIP
	I0108 21:52:14.540733   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:14.541096   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:ba:90", ip: ""} in network mk-kubernetes-upgrade-862639: {Iface:virbr1 ExpiryTime:2024-01-08 22:51:38 +0000 UTC Type:0 Mac:52:54:00:77:ba:90 Iaid: IPaddr:192.168.72.210 Prefix:24 Hostname:kubernetes-upgrade-862639 Clientid:01:52:54:00:77:ba:90}
	I0108 21:52:14.541149   58333 main.go:141] libmachine: (kubernetes-upgrade-862639) DBG | domain kubernetes-upgrade-862639 has defined IP address 192.168.72.210 and MAC address 52:54:00:77:ba:90 in network mk-kubernetes-upgrade-862639
	I0108 21:52:14.541322   58333 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0108 21:52:14.546459   58333 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 21:52:14.546503   58333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:52:14.601054   58333 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:52:14.601083   58333 crio.go:415] Images already preloaded, skipping extraction
	I0108 21:52:14.601132   58333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 21:52:14.641824   58333 crio.go:496] all images are preloaded for cri-o runtime.
	I0108 21:52:14.641856   58333 cache_images.go:84] Images are preloaded, skipping loading
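	Both crictl images --output json runs above return the full preloaded image list as JSON. A small sketch of inspecting that output by hand (jq is an assumption here; the minikube guest may not ship it):

	    # Print the repo tags of every image CRI-O already has.
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'
	    # Without jq, the plain table view works too:
	    sudo crictl images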
	I0108 21:52:14.641911   58333 ssh_runner.go:195] Run: crio config
	I0108 21:52:14.700360   58333 cni.go:84] Creating CNI manager for ""
	I0108 21:52:14.700383   58333 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 21:52:14.700401   58333 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:52:14.700418   58333 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.210 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-862639 NodeName:kubernetes-upgrade-862639 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:52:14.700542   58333 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-862639"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 21:52:14.700604   58333 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-862639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-862639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
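	The drop-in above replaces the kubelet ExecStart so it points at the CRI-O socket and the node IP. A quick, illustrative way to confirm which ExecStart systemd actually resolved after the daemon-reload:

	    # Show the unit together with the 10-kubeadm.conf drop-in written above.
	    sudo systemctl cat kubelet
	    # Or just the effective ExecStart line:
	    sudo systemctl show kubelet -p ExecStart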
	I0108 21:52:14.700655   58333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0108 21:52:14.709925   58333 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:52:14.709986   58333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:52:14.718976   58333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (390 bytes)
	I0108 21:52:14.737788   58333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0108 21:52:14.755811   58333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0108 21:52:14.775129   58333 ssh_runner.go:195] Run: grep 192.168.72.210	control-plane.minikube.internal$ /etc/hosts
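	The grep above only checks whether control-plane.minikube.internal already resolves through /etc/hosts. A hedged sketch of the equivalent manual check; the append branch is illustrative, not necessarily what the tool does when the entry is missing:

	    # Pin the control-plane name to the node IP if it is not there yet.
	    grep -q 'control-plane.minikube.internal' /etc/hosts || \
	      echo '192.168.72.210 control-plane.minikube.internal' | sudo tee -a /etc/hosts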
	I0108 21:52:14.779313   58333 certs.go:56] Setting up /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kubernetes-upgrade-862639 for IP: 192.168.72.210
	I0108 21:52:14.779347   58333 certs.go:190] acquiring lock for shared ca certs: {Name:mke01aa9d73e320a9a3907677cf29c75f0fa86d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:52:14.779512   58333 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key
	I0108 21:52:14.779560   58333 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key
	I0108 21:52:14.779623   58333 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kubernetes-upgrade-862639/client.key
	I0108 21:52:14.779677   58333 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kubernetes-upgrade-862639/apiserver.key.100ac6ef
	I0108 21:52:14.779724   58333 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kubernetes-upgrade-862639/proxy-client.key
	I0108 21:52:14.779850   58333 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem (1338 bytes)
	W0108 21:52:14.779889   58333 certs.go:433] ignoring /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896_empty.pem, impossibly tiny 0 bytes
	I0108 21:52:14.779900   58333 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:52:14.779920   58333 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem (1082 bytes)
	I0108 21:52:14.779945   58333 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:52:14.779971   58333 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem (1675 bytes)
	I0108 21:52:14.780019   58333 certs.go:437] found cert: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem (1708 bytes)
	I0108 21:52:14.780641   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kubernetes-upgrade-862639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:52:14.804780   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kubernetes-upgrade-862639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 21:52:14.828083   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kubernetes-upgrade-862639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:52:14.853507   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/kubernetes-upgrade-862639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:52:14.877890   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:52:14.903193   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 21:52:14.927807   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:52:14.954375   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0108 21:52:14.979575   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /usr/share/ca-certificates/178962.pem (1708 bytes)
	I0108 21:52:15.004423   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:52:15.029547   58333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/17896.pem --> /usr/share/ca-certificates/17896.pem (1338 bytes)
	I0108 21:52:15.057052   58333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:52:15.074131   58333 ssh_runner.go:195] Run: openssl version
	I0108 21:52:15.080482   58333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/178962.pem && ln -fs /usr/share/ca-certificates/178962.pem /etc/ssl/certs/178962.pem"
	I0108 21:52:15.092305   58333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/178962.pem
	I0108 21:52:15.097627   58333 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:22 /usr/share/ca-certificates/178962.pem
	I0108 21:52:15.097692   58333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/178962.pem
	I0108 21:52:15.104220   58333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/178962.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:52:15.113487   58333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:52:15.124809   58333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:52:15.129737   58333 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:12 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:52:15.129809   58333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:52:15.135787   58333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:52:15.144570   58333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17896.pem && ln -fs /usr/share/ca-certificates/17896.pem /etc/ssl/certs/17896.pem"
	I0108 21:52:15.155146   58333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17896.pem
	I0108 21:52:15.160899   58333 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:22 /usr/share/ca-certificates/17896.pem
	I0108 21:52:15.160957   58333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17896.pem
	I0108 21:52:15.167382   58333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17896.pem /etc/ssl/certs/51391683.0"
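	The openssl/ln pairs above follow the standard OpenSSL hashed-CA layout: the subject-name hash of each certificate becomes the name of a <hash>.0 symlink under /etc/ssl/certs, which is how the 3ec20f2e.0, b5213941.0 and 51391683.0 links are derived. A minimal sketch of the technique (the certificate path is a placeholder, not a file from this run):

	    cert=/usr/share/ca-certificates/example.pem       # placeholder path
	    hash=$(openssl x509 -hash -noout -in "$cert")     # subject hash, e.g. 3ec20f2e
	    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"    # OpenSSL locates CAs by <hash>.N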
	I0108 21:52:15.176209   58333 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:52:15.181300   58333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0108 21:52:15.187871   58333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0108 21:52:15.194172   58333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0108 21:52:15.201930   58333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0108 21:52:15.209695   58333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0108 21:52:15.216065   58333 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
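	Each -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now; openssl prints a short message and exits non-zero if it will have expired by then. Illustrative usage:

	    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	      echo "certificate valid for at least another day"
	    else
	      echo "certificate expires within 24h"
	    fi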
	I0108 21:52:15.222518   58333 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-862639 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-862639 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.210 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 21:52:15.222628   58333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0108 21:52:15.222681   58333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 21:52:15.273049   58333 cri.go:89] found id: "684c5ef5c3750e1b0cff02f2f551c9ea0f7a8e8c8d3e25f3880893eaca3d08b4"
	I0108 21:52:15.273078   58333 cri.go:89] found id: "3813c71faa2ec66552ad5151b31aec14326cc418bb36b70c0a80c56edcba3630"
	I0108 21:52:15.273085   58333 cri.go:89] found id: "9f94df915b9cbf12663247650e58df4677d7ead59ed822cf85c54a1166dcd8f5"
	I0108 21:52:15.273091   58333 cri.go:89] found id: "874ceb74f6e51b416c2c5a52fb47b5b843dcf0df7ce7dedda0e2373ec817b8f0"
	I0108 21:52:15.273096   58333 cri.go:89] found id: "f2270d09a7c617e78f310b927e5b13ebeac88a5372575d05ad3f433a89138545"
	I0108 21:52:15.273101   58333 cri.go:89] found id: "f9abfa43bb94f77952c9baa97d2f1b97e7fa07d14a744ba47e922e9c939c6182"
	I0108 21:52:15.273106   58333 cri.go:89] found id: "a83b2d6e8275b5ea3154e9a294e5e29a1c7fd1f42fa8acd11e1b4c6049ac758c"
	I0108 21:52:15.273113   58333 cri.go:89] found id: ""
	I0108 21:52:15.273166   58333 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	-- Journal begins at Mon 2024-01-08 21:36:02 UTC, ends at Mon 2024-01-08 21:52:26 UTC. --
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.015399353Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257\"" file="storage/storage_transport.go:185"
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.015525975Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591\"" file="storage/storage_transport.go:185"
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.015601698Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1\"" file="storage/storage_transport.go:185"
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.015728534Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e\"" file="storage/storage_transport.go:185"
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.015844024Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" file="storage/storage_transport.go:185"
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.016281865Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" file="storage/storage_transport.go:185"
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.016403845Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" file="storage/storage_transport.go:185"
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.016480599Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562\"" file="storage/storage_transport.go:185"
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.016592208Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc\"" file="storage/storage_transport.go:185"
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.016681064Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"" file="storage/storage_transport.go:185"
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.016879019Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.4],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499 registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb],Size_:127226832,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,RepoTags:[registry.k8s.io/kube-controller-manager:v1.28.4],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232],Size_:123261750,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:e3db313c6dbc065
d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,RepoTags:[registry.k8s.io/kube-scheduler:v1.28.4],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32],Size_:61551410,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,RepoTags:[registry.k8s.io/kube-proxy:v1.28.4],RepoDigests:[registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532],Size_:74749335,Uid:nil,Username:,Spec:nil,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd280
01e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},&Image{Id:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,RepoTags:[registry.k8s.io/etcd:3.5.9-0],RepoDigests:[registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15 registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3],Size_:295456551,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:
[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},&Image{Id:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,RepoTags:[docker.io/kindest/kindnetd:v20230809-80a64d96],RepoDigests:[docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052 docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4],Size_:65258016,Uid:nil,Username:,Spec:nil,},&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-glibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,Uid:nil,Use
rname:,Spec:nil,},},}" file="go-grpc-middleware/chain.go:25" id=09054e64-b8f0-432c-90fe-b7b4040e094a name=/runtime.v1.ImageService/ListImages
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.080365322Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8a75f721-692e-4418-9277-c3757c5bc5ec name=/runtime.v1.RuntimeService/Version
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.080458416Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8a75f721-692e-4418-9277-c3757c5bc5ec name=/runtime.v1.RuntimeService/Version
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.090252259Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=eba3ce82-520a-40dd-b1e1-be55df5dd980 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.091456922Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750746091423886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=eba3ce82-520a-40dd-b1e1-be55df5dd980 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.092881851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c7d43b95-58df-4a27-9140-43b13b76e4ea name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.093059024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c7d43b95-58df-4a27-9140-43b13b76e4ea name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.093270337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c,PodSandboxId:4fbeb031951ac718803f84c1d202280a94e3361a88fe22bfe7da38e8daf08b76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749831065446353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef46fa1-8048-4f26-b999-6b78c5450cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 68872a5c,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95ea6fd0defe771f009bd79c5348511ad75bea05732ff1a2b816bd58eeba1b3d,PodSandboxId:6565297492e79f3df1c9a4130be7c007460cc922548e6b1a925e21959516e31d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749811345129715,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ccaabb4-5810-420a-af04-4ea75d328791,},Annotations:map[string]string{io.kubernetes.container.hash: 25005988,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320,PodSandboxId:367f2c30b7577933f36f5a2a6d14047516b4f5fe0b4a76be88be2485ef0ba7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749808271567935,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jlpx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3128151-c8ce-44da-a192-3b4a2ae1e3f8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b6d9076,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1,PodSandboxId:9036c894ad3404e927665c586bee01b9ede100a62ffc307204856217d014b025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749800890222174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qs2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed301cf2-
3f54-4b4c-880b-2fe829c81093,},Annotations:map[string]string{io.kubernetes.container.hash: 2132f592,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5,PodSandboxId:4fbeb031951ac718803f84c1d202280a94e3361a88fe22bfe7da38e8daf08b76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749800681470370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef46fa1-80
48-4f26-b999-6b78c5450cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 68872a5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e,PodSandboxId:5e121919d98f68d6e6eecf7e6a5f19a99fc772531c8f714e0d96d8ce36262730,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749795499108360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d5e6f3e4eb948f415b8d1bf28546aa,},Annotations:map[string
]string{io.kubernetes.container.hash: efae481c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b,PodSandboxId:c6b0d8cddd1cfff0e8612365aba34c5d49a858d8a20ea2a9b341df852d440364,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749795230849821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4caeedcddcdec781bbb93408f1e0287,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267,PodSandboxId:c4af4d7b9cd0ce7d71fbfdc32c9c8676427ff1d2d3a42ab06f07b01bcba93121,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749795075583754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25852399f68db47cb85b5f113983dded,},Annotations:map[string]string{io.kubernete
s.container.hash: 27814462,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87,PodSandboxId:e653836694ad618561fe4fe96d87e01f876dfc37e1929f04c3e83912b9b6f5b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749794807236128,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f648c750c1fcf7ff3a889e684ae9738a,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c7d43b95-58df-4a27-9140-43b13b76e4ea name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.147256434Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8092d2a9-c4db-4693-87f3-c59aad5af18f name=/runtime.v1.RuntimeService/Version
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.147348384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8092d2a9-c4db-4693-87f3-c59aad5af18f name=/runtime.v1.RuntimeService/Version
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.149755773Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=88bccb97-a65c-4d49-840d-3aee2020c858 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.150437086Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1704750746150410326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=88bccb97-a65c-4d49-840d-3aee2020c858 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.151609066Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=57df3ec5-0757-4995-a8b6-dcc449ccf8ed name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.151818668Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=57df3ec5-0757-4995-a8b6-dcc449ccf8ed name=/runtime.v1.RuntimeService/ListContainers
	Jan 08 21:52:26 embed-certs-930023 crio[727]: time="2024-01-08 21:52:26.152200411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c,PodSandboxId:4fbeb031951ac718803f84c1d202280a94e3361a88fe22bfe7da38e8daf08b76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1704749831065446353,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef46fa1-8048-4f26-b999-6b78c5450cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 68872a5c,io.kubernetes.container.restartCount: 2,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95ea6fd0defe771f009bd79c5348511ad75bea05732ff1a2b816bd58eeba1b3d,PodSandboxId:6565297492e79f3df1c9a4130be7c007460cc922548e6b1a925e21959516e31d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1704749811345129715,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ccaabb4-5810-420a-af04-4ea75d328791,},Annotations:map[string]string{io.kubernetes.container.hash: 25005988,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320,PodSandboxId:367f2c30b7577933f36f5a2a6d14047516b4f5fe0b4a76be88be2485ef0ba7d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1704749808271567935,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jlpx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3128151-c8ce-44da-a192-3b4a2ae1e3f8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b6d9076,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1,PodSandboxId:9036c894ad3404e927665c586bee01b9ede100a62ffc307204856217d014b025,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1704749800890222174,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qs2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed301cf2-
3f54-4b4c-880b-2fe829c81093,},Annotations:map[string]string{io.kubernetes.container.hash: 2132f592,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5,PodSandboxId:4fbeb031951ac718803f84c1d202280a94e3361a88fe22bfe7da38e8daf08b76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1704749800681470370,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ef46fa1-80
48-4f26-b999-6b78c5450cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 68872a5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e,PodSandboxId:5e121919d98f68d6e6eecf7e6a5f19a99fc772531c8f714e0d96d8ce36262730,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1704749795499108360,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76d5e6f3e4eb948f415b8d1bf28546aa,},Annotations:map[string
]string{io.kubernetes.container.hash: efae481c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b,PodSandboxId:c6b0d8cddd1cfff0e8612365aba34c5d49a858d8a20ea2a9b341df852d440364,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1704749795230849821,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4caeedcddcdec781bbb93408f1e0287,},Annotations:map[string]string{io
.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267,PodSandboxId:c4af4d7b9cd0ce7d71fbfdc32c9c8676427ff1d2d3a42ab06f07b01bcba93121,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1704749795075583754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25852399f68db47cb85b5f113983dded,},Annotations:map[string]string{io.kubernete
s.container.hash: 27814462,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87,PodSandboxId:e653836694ad618561fe4fe96d87e01f876dfc37e1929f04c3e83912b9b6f5b5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1704749794807236128,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-930023,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f648c750c1fcf7ff3a889e684ae9738a,},Annotations:map[
string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=57df3ec5-0757-4995-a8b6-dcc449ccf8ed name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60dc1219493a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      15 minutes ago      Running             storage-provisioner       2                   4fbeb031951ac       storage-provisioner
	95ea6fd0defe7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   15 minutes ago      Running             busybox                   1                   6565297492e79       busybox
	040312a16e063       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      15 minutes ago      Running             coredns                   1                   367f2c30b7577       coredns-5dd5756b68-jlpx5
	ec5e034aaa19f       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      15 minutes ago      Running             kube-proxy                1                   9036c894ad340       kube-proxy-8qs2r
	82b4cf0190ce0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      15 minutes ago      Exited              storage-provisioner       1                   4fbeb031951ac       storage-provisioner
	07d60f2b2378b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      15 minutes ago      Running             etcd                      1                   5e121919d98f6       etcd-embed-certs-930023
	18264b7b5f911       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      15 minutes ago      Running             kube-scheduler            1                   c6b0d8cddd1cf       kube-scheduler-embed-certs-930023
	aab0e15e7d8be       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      15 minutes ago      Running             kube-apiserver            1                   c4af4d7b9cd0c       kube-apiserver-embed-certs-930023
	3722917aa56b0       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      15 minutes ago      Running             kube-controller-manager   1                   e653836694ad6       kube-controller-manager-embed-certs-930023
	
	
	==> coredns [040312a16e063a73f44751d3097f10fad18fe5178a6479510cf88164b83cf320] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50305 - 14687 "HINFO IN 594706197751516603.1586272236687783089. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015082507s
	
	
	==> describe nodes <==
	Name:               embed-certs-930023
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-930023
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=255792ad75c0218cbe9d2c7121633a1b1d442f28
	                    minikube.k8s.io/name=embed-certs-930023
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_27_48_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:27:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-930023
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:52:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:47:23 +0000   Mon, 08 Jan 2024 21:27:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:47:23 +0000   Mon, 08 Jan 2024 21:27:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:47:23 +0000   Mon, 08 Jan 2024 21:27:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:47:23 +0000   Mon, 08 Jan 2024 21:36:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.142
	  Hostname:    embed-certs-930023
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2804a5b84d73408e9397b4caab2b5e2d
	  System UUID:                2804a5b8-4d73-408e-9397-b4caab2b5e2d
	  Boot ID:                    cfbceab9-05b0-4b7e-960d-291223a439c9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-5dd5756b68-jlpx5                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	  kube-system                 etcd-embed-certs-930023                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kube-apiserver-embed-certs-930023             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-embed-certs-930023    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-8qs2r                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-embed-certs-930023             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 metrics-server-57f55c9bc5-rj499               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         24m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node embed-certs-930023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node embed-certs-930023 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node embed-certs-930023 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     24m                kubelet          Node embed-certs-930023 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  24m                kubelet          Node embed-certs-930023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m                kubelet          Node embed-certs-930023 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                24m                kubelet          Node embed-certs-930023 status is now: NodeReady
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           24m                node-controller  Node embed-certs-930023 event: Registered Node embed-certs-930023 in Controller
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-930023 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-930023 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-930023 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-930023 event: Registered Node embed-certs-930023 in Controller
	
	
	==> dmesg <==
	[Jan 8 21:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068858] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.692755] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jan 8 21:36] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.134220] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.608740] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.965119] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.116396] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.145997] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.105066] systemd-fstab-generator[687]: Ignoring "noauto" for root device
	[  +0.252224] systemd-fstab-generator[711]: Ignoring "noauto" for root device
	[ +18.001318] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[ +14.084638] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [07d60f2b2378b8de4229e3984f98b36980f69241be12e408c3d5099cb44e9f2e] <==
	{"level":"info","ts":"2024-01-08T21:36:37.813815Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d7a5d3e20a6b0ba7","local-member-attributes":"{Name:embed-certs-930023 ClientURLs:[https://192.168.39.142:2379]}","request-path":"/0/members/d7a5d3e20a6b0ba7/attributes","cluster-id":"f7d6b5428c0c9dc0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:36:37.82207Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:36:37.822092Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-01-08T21:42:26.90789Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.985899ms","expected-duration":"100ms","prefix":"","request":"header:<ID:839794796499238848 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-930023\" mod_revision:861 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-930023\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-930023\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-01-08T21:42:26.908617Z","caller":"traceutil/trace.go:171","msg":"trace[2128994375] transaction","detail":"{read_only:false; response_revision:871; number_of_response:1; }","duration":"295.963181ms","start":"2024-01-08T21:42:26.612616Z","end":"2024-01-08T21:42:26.908579Z","steps":["trace[2128994375] 'process raft request'  (duration: 36.183862ms)","trace[2128994375] 'compare'  (duration: 257.903032ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T21:42:27.150561Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.512658ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:42:27.150765Z","caller":"traceutil/trace.go:171","msg":"trace[583155111] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:871; }","duration":"125.717037ms","start":"2024-01-08T21:42:27.025014Z","end":"2024-01-08T21:42:27.150731Z","steps":["trace[583155111] 'range keys from in-memory index tree'  (duration: 125.367851ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:42:28.83914Z","caller":"traceutil/trace.go:171","msg":"trace[306325198] transaction","detail":"{read_only:false; response_revision:872; number_of_response:1; }","duration":"247.335608ms","start":"2024-01-08T21:42:28.591778Z","end":"2024-01-08T21:42:28.839114Z","steps":["trace[306325198] 'process raft request'  (duration: 246.889218ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:46:37.85256Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":829}
	{"level":"info","ts":"2024-01-08T21:46:37.856128Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":829,"took":"3.231177ms","hash":3070780785}
	{"level":"info","ts":"2024-01-08T21:46:37.856198Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3070780785,"revision":829,"compact-revision":-1}
	{"level":"info","ts":"2024-01-08T21:50:15.752688Z","caller":"traceutil/trace.go:171","msg":"trace[1614511165] linearizableReadLoop","detail":"{readStateIndex:1448; appliedIndex:1447; }","duration":"151.269687ms","start":"2024-01-08T21:50:15.601366Z","end":"2024-01-08T21:50:15.752636Z","steps":["trace[1614511165] 'read index received'  (duration: 151.103976ms)","trace[1614511165] 'applied index is now lower than readState.Index'  (duration: 164.978µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T21:50:15.753222Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.754743ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-01-08T21:50:15.753321Z","caller":"traceutil/trace.go:171","msg":"trace[2048622282] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:1249; }","duration":"151.957945ms","start":"2024-01-08T21:50:15.601342Z","end":"2024-01-08T21:50:15.7533Z","steps":["trace[2048622282] 'agreement among raft nodes before linearized reading'  (duration: 151.663886ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:50:15.753576Z","caller":"traceutil/trace.go:171","msg":"trace[1066800639] transaction","detail":"{read_only:false; response_revision:1249; number_of_response:1; }","duration":"163.375371ms","start":"2024-01-08T21:50:15.590117Z","end":"2024-01-08T21:50:15.753493Z","steps":["trace[1066800639] 'process raft request'  (duration: 162.405444ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:50:44.11402Z","caller":"traceutil/trace.go:171","msg":"trace[278334439] transaction","detail":"{read_only:false; response_revision:1271; number_of_response:1; }","duration":"176.364349ms","start":"2024-01-08T21:50:43.937617Z","end":"2024-01-08T21:50:44.113981Z","steps":["trace[278334439] 'process raft request'  (duration: 174.988618ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:50:44.337337Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.550446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:50:44.337493Z","caller":"traceutil/trace.go:171","msg":"trace[2086171443] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1271; }","duration":"107.786417ms","start":"2024-01-08T21:50:44.229681Z","end":"2024-01-08T21:50:44.337467Z","steps":["trace[2086171443] 'range keys from in-memory index tree'  (duration: 107.47739ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:51:07.317092Z","caller":"traceutil/trace.go:171","msg":"trace[1309733402] transaction","detail":"{read_only:false; response_revision:1290; number_of_response:1; }","duration":"123.603142ms","start":"2024-01-08T21:51:07.193264Z","end":"2024-01-08T21:51:07.316867Z","steps":["trace[1309733402] 'process raft request'  (duration: 123.066444ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:51:08.704544Z","caller":"traceutil/trace.go:171","msg":"trace[110368659] transaction","detail":"{read_only:false; response_revision:1291; number_of_response:1; }","duration":"429.221539ms","start":"2024-01-08T21:51:08.2753Z","end":"2024-01-08T21:51:08.704522Z","steps":["trace[110368659] 'process raft request'  (duration: 429.06638ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:51:08.705655Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T21:51:08.275283Z","time spent":"430.191446ms","remote":"127.0.0.1:58546","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1288 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-01-08T21:51:37.862263Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1071}
	{"level":"info","ts":"2024-01-08T21:51:37.863829Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1071,"took":"1.24436ms","hash":4001124734}
	{"level":"info","ts":"2024-01-08T21:51:37.863867Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4001124734,"revision":1071,"compact-revision":829}
	{"level":"info","ts":"2024-01-08T21:52:10.899213Z","caller":"traceutil/trace.go:171","msg":"trace[2034011757] transaction","detail":"{read_only:false; response_revision:1342; number_of_response:1; }","duration":"143.990357ms","start":"2024-01-08T21:52:10.755186Z","end":"2024-01-08T21:52:10.899176Z","steps":["trace[2034011757] 'process raft request'  (duration: 76.401802ms)","trace[2034011757] 'compare'  (duration: 66.958463ms)"],"step_count":2}
	
	
	==> kernel <==
	 21:52:26 up 16 min,  0 users,  load average: 0.14, 0.20, 0.18
	Linux embed-certs-930023 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [aab0e15e7d8becf75ac7f1fb04e6b8b51bab129034c962031d452ada6a87e267] <==
	E0108 21:47:40.450315       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:47:40.450321       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:48:39.293695       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 21:49:39.293465       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 21:49:40.449486       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:49:40.449603       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:49:40.449632       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:49:40.450672       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:49:40.450773       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:49:40.450781       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 21:50:39.293870       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 21:51:39.293604       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 21:51:39.455044       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:51:39.455188       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:51:39.455740       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0108 21:51:40.455998       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:51:40.456144       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 21:51:40.456185       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0108 21:51:40.456023       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 21:51:40.456271       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0108 21:51:40.457611       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3722917aa56b0b1ac22bfe05670254f12fe94d8935a983effb79cd8ed1fc1f87] <==
	I0108 21:46:52.494998       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:47:21.989279       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:47:22.503065       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 21:47:44.868358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="283.717µs"
	E0108 21:47:51.995557       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:47:52.511293       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0108 21:47:55.875486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="205.375µs"
	E0108 21:48:22.001201       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:48:22.526022       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:48:52.007458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:48:52.534764       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:49:22.012715       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:49:22.544386       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:49:52.020126       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:49:52.553083       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:50:22.028160       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:50:22.562369       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:50:52.038251       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:50:52.575385       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:51:22.045620       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:51:22.585029       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:51:52.055501       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:51:52.595899       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0108 21:52:22.063559       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0108 21:52:22.606534       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ec5e034aaa19f490cff5196eea544ec7d39f4ff16f727b84269c4802591df0e1] <==
	I0108 21:36:41.178295       1 server_others.go:69] "Using iptables proxy"
	I0108 21:36:41.203426       1 node.go:141] Successfully retrieved node IP: 192.168.39.142
	I0108 21:36:41.281270       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:36:41.281344       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:36:41.285510       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:36:41.285589       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:36:41.285817       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:36:41.285864       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:36:41.287162       1 config.go:188] "Starting service config controller"
	I0108 21:36:41.287229       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:36:41.287284       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:36:41.287327       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:36:41.288134       1 config.go:315] "Starting node config controller"
	I0108 21:36:41.288176       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:36:41.388118       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:36:41.388236       1 shared_informer.go:318] Caches are synced for node config
	I0108 21:36:41.388289       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [18264b7b5f91170d1dfce83a81132d57c3500af04767f8e529af9281854bfc7b] <==
	I0108 21:36:37.604237       1 serving.go:348] Generated self-signed cert in-memory
	W0108 21:36:39.395532       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0108 21:36:39.395622       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:36:39.395651       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0108 21:36:39.395678       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0108 21:36:39.461001       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0108 21:36:39.461090       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:36:39.462707       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0108 21:36:39.462758       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0108 21:36:39.463347       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0108 21:36:39.463443       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0108 21:36:39.563571       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:36:02 UTC, ends at Mon 2024-01-08 21:52:27 UTC. --
	Jan 08 21:49:33 embed-certs-930023 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:49:33 embed-certs-930023 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:49:33 embed-certs-930023 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:49:40 embed-certs-930023 kubelet[932]: E0108 21:49:40.850994     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:49:54 embed-certs-930023 kubelet[932]: E0108 21:49:54.850267     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:50:05 embed-certs-930023 kubelet[932]: E0108 21:50:05.850415     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:50:20 embed-certs-930023 kubelet[932]: E0108 21:50:20.851832     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:50:33 embed-certs-930023 kubelet[932]: E0108 21:50:33.866858     932 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:50:33 embed-certs-930023 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:50:33 embed-certs-930023 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:50:33 embed-certs-930023 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:50:35 embed-certs-930023 kubelet[932]: E0108 21:50:35.852188     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:50:49 embed-certs-930023 kubelet[932]: E0108 21:50:49.851188     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:51:02 embed-certs-930023 kubelet[932]: E0108 21:51:02.851764     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:51:16 embed-certs-930023 kubelet[932]: E0108 21:51:16.851726     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:51:27 embed-certs-930023 kubelet[932]: E0108 21:51:27.851054     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:51:33 embed-certs-930023 kubelet[932]: E0108 21:51:33.835754     932 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Jan 08 21:51:33 embed-certs-930023 kubelet[932]: E0108 21:51:33.869036     932 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:51:33 embed-certs-930023 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:51:33 embed-certs-930023 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:51:33 embed-certs-930023 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:51:39 embed-certs-930023 kubelet[932]: E0108 21:51:39.851681     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:51:50 embed-certs-930023 kubelet[932]: E0108 21:51:50.851635     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:52:05 embed-certs-930023 kubelet[932]: E0108 21:52:05.852180     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	Jan 08 21:52:19 embed-certs-930023 kubelet[932]: E0108 21:52:19.851377     932 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rj499" podUID="5873675f-8a6c-4404-be01-b46763a62f5c"
	
	
	==> storage-provisioner [60dc1219493a9abe8ecd8d401fe567b705bb8c578107cbd71c570b8b59acb16c] <==
	I0108 21:37:11.203372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 21:37:11.224836       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 21:37:11.225578       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 21:37:28.633796       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 21:37:28.634145       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-930023_ab9dd66f-15b7-4c6d-855b-312e7052f765!
	I0108 21:37:28.635432       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"730647eb-bf8f-4237-87b0-8860cd3b96c5", APIVersion:"v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-930023_ab9dd66f-15b7-4c6d-855b-312e7052f765 became leader
	I0108 21:37:28.735420       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-930023_ab9dd66f-15b7-4c6d-855b-312e7052f765!
	
	
	==> storage-provisioner [82b4cf0190ce01d6c48381aa6254032c2b3d422ba1df20f1cbe8b5c91d6aaee5] <==
	I0108 21:36:40.872322       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0108 21:37:10.874414       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0108 21:52:25.506728   58539 logs.go:266] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17907-10702/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
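Note on the "bufio.Scanner: token too long" error in the stderr block above: it is Go's bufio.Scanner hitting its default 64 KiB token limit while reading lastStart.txt, which contains at least one line longer than that. As a minimal sketch only (not minikube's actual implementation; the file path below is a placeholder), the limit can be raised with Scanner.Buffer before scanning:

	package main
	
	import (
		"bufio"
		"fmt"
		"os"
	)
	
	func main() {
		// Placeholder path for illustration; the report refers to
		// .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		// Allow tokens (lines) up to 1 MiB instead of the default 64 KiB,
		// which is what triggers "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}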
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-930023 -n embed-certs-930023
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-930023 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-rj499
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-930023 describe pod metrics-server-57f55c9bc5-rj499
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-930023 describe pod metrics-server-57f55c9bc5-rj499: exit status 1 (95.789873ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-rj499" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-930023 describe pod metrics-server-57f55c9bc5-rj499: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (135.89s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (283.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.3180319122.exe start -p stopped-upgrade-716145 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0108 21:51:51.620824   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.3180319122.exe start -p stopped-upgrade-716145 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m14.77688423s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.3180319122.exe -p stopped-upgrade-716145 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.3180319122.exe -p stopped-upgrade-716145 stop: (1m33.189295426s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-716145 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0108 21:55:32.533915   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-716145 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (55.827305111s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-716145] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-716145 in cluster stopped-upgrade-716145
	* Restarting existing kvm2 VM for "stopped-upgrade-716145" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:55:24.958486   63205 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:55:24.958648   63205 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:55:24.958660   63205 out.go:309] Setting ErrFile to fd 2...
	I0108 21:55:24.958668   63205 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:55:24.958995   63205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:55:24.959785   63205 out.go:303] Setting JSON to false
	I0108 21:55:24.961171   63205 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9449,"bootTime":1704741476,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:55:24.961262   63205 start.go:138] virtualization: kvm guest
	I0108 21:55:24.963917   63205 out.go:177] * [stopped-upgrade-716145] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:55:24.965441   63205 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:55:24.965436   63205 notify.go:220] Checking for updates...
	I0108 21:55:24.967015   63205 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:55:24.968823   63205 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:55:24.970480   63205 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:55:24.971895   63205 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:55:24.973843   63205 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:55:24.975766   63205 config.go:182] Loaded profile config "stopped-upgrade-716145": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 21:55:24.975782   63205 start_flags.go:694] config upgrade: Driver=kvm2
	I0108 21:55:24.975793   63205 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c
	I0108 21:55:24.975883   63205 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/stopped-upgrade-716145/config.json ...
	I0108 21:55:24.976489   63205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:55:24.976541   63205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:55:24.991608   63205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
	I0108 21:55:24.992043   63205 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:55:24.992659   63205 main.go:141] libmachine: Using API Version  1
	I0108 21:55:24.992685   63205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:55:24.992988   63205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:55:24.993202   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .DriverName
	I0108 21:55:24.996646   63205 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0108 21:55:24.998312   63205 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:55:24.998752   63205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:55:24.998804   63205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:55:25.014232   63205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I0108 21:55:25.014695   63205 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:55:25.015190   63205 main.go:141] libmachine: Using API Version  1
	I0108 21:55:25.015210   63205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:55:25.015567   63205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:55:25.015753   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .DriverName
	I0108 21:55:25.058824   63205 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 21:55:25.060437   63205 start.go:298] selected driver: kvm2
	I0108 21:55:25.060470   63205 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-716145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.61.187 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 21:55:25.060598   63205 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:55:25.061608   63205 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:55:25.061695   63205 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:55:25.081050   63205 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:55:25.081516   63205 cni.go:84] Creating CNI manager for ""
	I0108 21:55:25.081541   63205 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0108 21:55:25.081553   63205 start_flags.go:323] config:
	{Name:stopped-upgrade-716145 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.61.187 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I0108 21:55:25.081760   63205 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:55:25.084080   63205 out.go:177] * Starting control plane node stopped-upgrade-716145 in cluster stopped-upgrade-716145
	I0108 21:55:25.085530   63205 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0108 21:55:25.543961   63205 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0108 21:55:25.544152   63205 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/stopped-upgrade-716145/config.json ...
	I0108 21:55:25.544258   63205 cache.go:107] acquiring lock: {Name:mk404ee59d151f42edf5b0bb65897bb384427ec6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:55:25.544294   63205 cache.go:107] acquiring lock: {Name:mk65389ddcd499e05451b4ba07b5887fde683f25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:55:25.544295   63205 cache.go:107] acquiring lock: {Name:mka841fe0ca90530e95adda70e575bf96a6fa659 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:55:25.544331   63205 cache.go:107] acquiring lock: {Name:mkeb3e7e4793a65991e84bd10e24abf147a4d51a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:55:25.544317   63205 cache.go:107] acquiring lock: {Name:mk1e6c735aae94af16a2e2bf6ff299b004c771f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:55:25.544356   63205 cache.go:107] acquiring lock: {Name:mk1bac41a2910c6e144ea55b3470102402a1bfda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:55:25.544379   63205 cache.go:107] acquiring lock: {Name:mk6ffccac4c858f5ee7d8c1ef59b5ce6772c4de9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:55:25.544394   63205 cache.go:115] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0108 21:55:25.544405   63205 cache.go:115] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0108 21:55:25.544405   63205 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 74.257µs
	I0108 21:55:25.544415   63205 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0108 21:55:25.544402   63205 cache.go:107] acquiring lock: {Name:mk74c6e324c6e41d154535f7e724b46548b36d70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:55:25.544417   63205 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 112.436µs
	I0108 21:55:25.544424   63205 cache.go:115] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0108 21:55:25.544386   63205 cache.go:115] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0108 21:55:25.544429   63205 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0108 21:55:25.544434   63205 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 58.559µs
	I0108 21:55:25.544441   63205 cache.go:115] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0108 21:55:25.544443   63205 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0108 21:55:25.544446   63205 cache.go:115] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0108 21:55:25.544447   63205 cache.go:115] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0108 21:55:25.544450   63205 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 121.555µs
	I0108 21:55:25.544459   63205 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0108 21:55:25.544455   63205 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 207.83µs
	I0108 21:55:25.544468   63205 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0108 21:55:25.544442   63205 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 170.023µs
	I0108 21:55:25.544476   63205 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0108 21:55:25.544386   63205 cache.go:115] /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0108 21:55:25.544485   63205 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 208.664µs
	I0108 21:55:25.544493   63205 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0108 21:55:25.544455   63205 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 58.044µs
	I0108 21:55:25.544491   63205 start.go:365] acquiring machines lock for stopped-upgrade-716145: {Name:mk827908c3e5a4c7c775c42e2a2e4218ad445715 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:55:25.544501   63205 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0108 21:55:25.544511   63205 cache.go:87] Successfully saved all images to host disk.
	I0108 21:55:35.335241   63205 start.go:369] acquired machines lock for "stopped-upgrade-716145" in 9.790725371s
	I0108 21:55:35.335289   63205 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:55:35.335302   63205 fix.go:54] fixHost starting: minikube
	I0108 21:55:35.337145   63205 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 21:55:35.337267   63205 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:55:35.359132   63205 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38957
	I0108 21:55:35.359591   63205 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:55:35.360285   63205 main.go:141] libmachine: Using API Version  1
	I0108 21:55:35.360321   63205 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:55:35.360734   63205 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:55:35.360909   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .DriverName
	I0108 21:55:35.361065   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetState
	I0108 21:55:35.363115   63205 fix.go:102] recreateIfNeeded on stopped-upgrade-716145: state=Stopped err=<nil>
	I0108 21:55:35.363184   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .DriverName
	W0108 21:55:35.363442   63205 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:55:35.365353   63205 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-716145" ...
	I0108 21:55:35.366674   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .Start
	I0108 21:55:35.367085   63205 main.go:141] libmachine: (stopped-upgrade-716145) Ensuring networks are active...
	I0108 21:55:35.368086   63205 main.go:141] libmachine: (stopped-upgrade-716145) Ensuring network default is active
	I0108 21:55:35.368500   63205 main.go:141] libmachine: (stopped-upgrade-716145) Ensuring network minikube-net is active
	I0108 21:55:35.369451   63205 main.go:141] libmachine: (stopped-upgrade-716145) Getting domain xml...
	I0108 21:55:35.370234   63205 main.go:141] libmachine: (stopped-upgrade-716145) Creating domain...
	I0108 21:55:37.599995   63205 main.go:141] libmachine: (stopped-upgrade-716145) Waiting to get IP...
	I0108 21:55:37.600926   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:37.601440   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:37.601507   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:37.601415   64438 retry.go:31] will retry after 298.664443ms: waiting for machine to come up
	I0108 21:55:37.902075   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:37.903190   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:37.903219   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:37.903148   64438 retry.go:31] will retry after 368.089953ms: waiting for machine to come up
	I0108 21:55:38.272854   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:38.273574   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:38.273606   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:38.273521   64438 retry.go:31] will retry after 312.010238ms: waiting for machine to come up
	I0108 21:55:38.586997   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:38.587567   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:38.587599   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:38.587486   64438 retry.go:31] will retry after 491.494621ms: waiting for machine to come up
	I0108 21:55:39.080291   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:39.080785   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:39.080810   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:39.080742   64438 retry.go:31] will retry after 681.292833ms: waiting for machine to come up
	I0108 21:55:39.763845   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:39.764494   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:39.764542   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:39.764424   64438 retry.go:31] will retry after 947.223133ms: waiting for machine to come up
	I0108 21:55:40.713330   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:40.713915   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:40.713951   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:40.713850   64438 retry.go:31] will retry after 761.304124ms: waiting for machine to come up
	I0108 21:55:41.476894   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:41.477496   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:41.477517   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:41.477474   64438 retry.go:31] will retry after 1.102501431s: waiting for machine to come up
	I0108 21:55:42.582075   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:42.582618   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:42.582653   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:42.582570   64438 retry.go:31] will retry after 1.238086898s: waiting for machine to come up
	I0108 21:55:43.823153   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:43.823693   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:43.823723   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:43.823648   64438 retry.go:31] will retry after 1.928543221s: waiting for machine to come up
	I0108 21:55:45.754142   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:45.754633   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:45.754654   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:45.754586   64438 retry.go:31] will retry after 2.851412276s: waiting for machine to come up
	I0108 21:55:48.609499   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:48.609961   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:48.609999   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:48.609914   64438 retry.go:31] will retry after 3.303784782s: waiting for machine to come up
	I0108 21:55:51.915780   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:51.916321   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:51.916344   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:51.916278   64438 retry.go:31] will retry after 2.814279516s: waiting for machine to come up
	I0108 21:55:54.733731   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:54.734370   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:54.734399   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:54.734321   64438 retry.go:31] will retry after 4.508058224s: waiting for machine to come up
	I0108 21:55:59.245374   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:55:59.245912   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:55:59.245936   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:55:59.245865   64438 retry.go:31] will retry after 5.87447314s: waiting for machine to come up
	I0108 21:56:05.125138   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:05.125529   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | unable to find current IP address of domain stopped-upgrade-716145 in network minikube-net
	I0108 21:56:05.125550   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | I0108 21:56:05.125495   64438 retry.go:31] will retry after 7.103105631s: waiting for machine to come up
	I0108 21:56:12.230520   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.231120   63205 main.go:141] libmachine: (stopped-upgrade-716145) Found IP for machine: 192.168.61.187
	I0108 21:56:12.231156   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has current primary IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.231169   63205 main.go:141] libmachine: (stopped-upgrade-716145) Reserving static IP address...
	I0108 21:56:12.231629   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "stopped-upgrade-716145", mac: "52:54:00:35:ee:e3", ip: "192.168.61.187"} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:12.231668   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-716145", mac: "52:54:00:35:ee:e3", ip: "192.168.61.187"}
	I0108 21:56:12.231689   63205 main.go:141] libmachine: (stopped-upgrade-716145) Reserved static IP address: 192.168.61.187
	I0108 21:56:12.231705   63205 main.go:141] libmachine: (stopped-upgrade-716145) Waiting for SSH to be available...
	I0108 21:56:12.231718   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | Getting to WaitForSSH function...
	I0108 21:56:12.234404   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.234807   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ee:e3", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:12.234839   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.234870   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | Using SSH client type: external
	I0108 21:56:12.234904   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | Using SSH private key: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/stopped-upgrade-716145/id_rsa (-rw-------)
	I0108 21:56:12.234978   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17907-10702/.minikube/machines/stopped-upgrade-716145/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 21:56:12.235002   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | About to run SSH command:
	I0108 21:56:12.235019   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | exit 0
	I0108 21:56:12.371742   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | SSH cmd err, output: <nil>: 
	I0108 21:56:12.372175   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetConfigRaw
	I0108 21:56:12.376700   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetIP
	I0108 21:56:12.379612   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.380123   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ee:e3", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:12.380154   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.380362   63205 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/stopped-upgrade-716145/config.json ...
	I0108 21:56:12.380566   63205 machine.go:88] provisioning docker machine ...
	I0108 21:56:12.380585   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .DriverName
	I0108 21:56:12.380818   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetMachineName
	I0108 21:56:12.380997   63205 buildroot.go:166] provisioning hostname "stopped-upgrade-716145"
	I0108 21:56:12.381018   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetMachineName
	I0108 21:56:12.381173   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHHostname
	I0108 21:56:12.384121   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.384522   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ee:e3", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:12.384556   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.384725   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHPort
	I0108 21:56:12.384985   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHKeyPath
	I0108 21:56:12.385175   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHKeyPath
	I0108 21:56:12.385362   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHUsername
	I0108 21:56:12.385559   63205 main.go:141] libmachine: Using SSH client type: native
	I0108 21:56:12.386069   63205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.187 22 <nil> <nil>}
	I0108 21:56:12.386094   63205 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-716145 && echo "stopped-upgrade-716145" | sudo tee /etc/hostname
	I0108 21:56:12.533360   63205 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-716145
	
	I0108 21:56:12.533396   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHHostname
	I0108 21:56:12.536383   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.536807   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ee:e3", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:12.536844   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.537018   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHPort
	I0108 21:56:12.537216   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHKeyPath
	I0108 21:56:12.537357   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHKeyPath
	I0108 21:56:12.537469   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHUsername
	I0108 21:56:12.537635   63205 main.go:141] libmachine: Using SSH client type: native
	I0108 21:56:12.538053   63205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.187 22 <nil> <nil>}
	I0108 21:56:12.538084   63205 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-716145' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-716145/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-716145' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:56:12.679162   63205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:56:12.679193   63205 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17907-10702/.minikube CaCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17907-10702/.minikube}
	I0108 21:56:12.679237   63205 buildroot.go:174] setting up certificates
	I0108 21:56:12.679255   63205 provision.go:83] configureAuth start
	I0108 21:56:12.679274   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetMachineName
	I0108 21:56:12.679580   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetIP
	I0108 21:56:12.682645   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.683084   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ee:e3", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:12.683114   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.683298   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHHostname
	I0108 21:56:12.686124   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.686509   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ee:e3", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:12.686554   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.686739   63205 provision.go:138] copyHostCerts
	I0108 21:56:12.686822   63205 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem, removing ...
	I0108 21:56:12.686836   63205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem
	I0108 21:56:12.686925   63205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/cert.pem (1123 bytes)
	I0108 21:56:12.687071   63205 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem, removing ...
	I0108 21:56:12.687098   63205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem
	I0108 21:56:12.687152   63205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/key.pem (1675 bytes)
	I0108 21:56:12.687255   63205 exec_runner.go:144] found /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem, removing ...
	I0108 21:56:12.687267   63205 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem
	I0108 21:56:12.687303   63205 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17907-10702/.minikube/ca.pem (1082 bytes)
	I0108 21:56:12.687380   63205 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-716145 san=[192.168.61.187 192.168.61.187 localhost 127.0.0.1 minikube stopped-upgrade-716145]
	I0108 21:56:12.873914   63205 provision.go:172] copyRemoteCerts
	I0108 21:56:12.873985   63205 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:56:12.874015   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHHostname
	I0108 21:56:12.877234   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.877630   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ee:e3", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:12.877668   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:12.877881   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHPort
	I0108 21:56:12.878085   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHKeyPath
	I0108 21:56:12.878235   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHUsername
	I0108 21:56:12.878375   63205 sshutil.go:53] new ssh client: &{IP:192.168.61.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/stopped-upgrade-716145/id_rsa Username:docker}
	I0108 21:56:12.976744   63205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0108 21:56:12.991887   63205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0108 21:56:13.008367   63205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:56:13.025483   63205 provision.go:86] duration metric: configureAuth took 346.211727ms
	I0108 21:56:13.025513   63205 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:56:13.025716   63205 config.go:182] Loaded profile config "stopped-upgrade-716145": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 21:56:13.025806   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHHostname
	I0108 21:56:13.029187   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:13.029533   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ee:e3", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:13.029588   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:13.029997   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHPort
	I0108 21:56:13.030224   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHKeyPath
	I0108 21:56:13.030460   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHKeyPath
	I0108 21:56:13.030607   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHUsername
	I0108 21:56:13.030798   63205 main.go:141] libmachine: Using SSH client type: native
	I0108 21:56:13.031192   63205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.187 22 <nil> <nil>}
	I0108 21:56:13.031228   63205 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0108 21:56:19.774611   63205 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0108 21:56:19.774643   63205 machine.go:91] provisioned docker machine in 7.394062612s
	I0108 21:56:19.774654   63205 start.go:300] post-start starting for "stopped-upgrade-716145" (driver="kvm2")
	I0108 21:56:19.774665   63205 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:56:19.774683   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .DriverName
	I0108 21:56:19.774995   63205 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:56:19.775027   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHHostname
	I0108 21:56:19.777458   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:19.777894   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ee:e3", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:19.777941   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:19.778110   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHPort
	I0108 21:56:19.778279   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHKeyPath
	I0108 21:56:19.778433   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHUsername
	I0108 21:56:19.778568   63205 sshutil.go:53] new ssh client: &{IP:192.168.61.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/stopped-upgrade-716145/id_rsa Username:docker}
	I0108 21:56:19.867895   63205 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:56:19.872025   63205 info.go:137] Remote host: Buildroot 2019.02.7
	I0108 21:56:19.872048   63205 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/addons for local assets ...
	I0108 21:56:19.872131   63205 filesync.go:126] Scanning /home/jenkins/minikube-integration/17907-10702/.minikube/files for local assets ...
	I0108 21:56:19.872241   63205 filesync.go:149] local asset: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem -> 178962.pem in /etc/ssl/certs
	I0108 21:56:19.872370   63205 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:56:19.878710   63205 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/ssl/certs/178962.pem --> /etc/ssl/certs/178962.pem (1708 bytes)
	I0108 21:56:19.893083   63205 start.go:303] post-start completed in 118.416101ms
	I0108 21:56:19.893106   63205 fix.go:56] fixHost completed within 44.557809811s
	I0108 21:56:19.893127   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHHostname
	I0108 21:56:19.895784   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:19.896177   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ee:e3", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:19.896215   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:19.896397   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHPort
	I0108 21:56:19.896595   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHKeyPath
	I0108 21:56:19.896780   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHKeyPath
	I0108 21:56:19.896983   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHUsername
	I0108 21:56:19.897173   63205 main.go:141] libmachine: Using SSH client type: native
	I0108 21:56:19.897495   63205 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.61.187 22 <nil> <nil>}
	I0108 21:56:19.897509   63205 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 21:56:20.028934   63205 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704750979.972361161
	
	I0108 21:56:20.028960   63205 fix.go:206] guest clock: 1704750979.972361161
	I0108 21:56:20.028968   63205 fix.go:219] Guest: 2024-01-08 21:56:19.972361161 +0000 UTC Remote: 2024-01-08 21:56:19.89310873 +0000 UTC m=+54.997050497 (delta=79.252431ms)
	I0108 21:56:20.028986   63205 fix.go:190] guest clock delta is within tolerance: 79.252431ms
	I0108 21:56:20.028990   63205 start.go:83] releasing machines lock for "stopped-upgrade-716145", held for 44.693724387s
	I0108 21:56:20.029020   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .DriverName
	I0108 21:56:20.029302   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetIP
	I0108 21:56:20.032502   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:20.032951   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ee:e3", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:20.032983   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:20.033208   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .DriverName
	I0108 21:56:20.033736   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .DriverName
	I0108 21:56:20.033932   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .DriverName
	I0108 21:56:20.034046   63205 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:56:20.034094   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHHostname
	I0108 21:56:20.034163   63205 ssh_runner.go:195] Run: cat /version.json
	I0108 21:56:20.034192   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHHostname
	I0108 21:56:20.036923   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:20.037230   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:20.037259   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ee:e3", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:20.037289   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:20.037396   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHPort
	I0108 21:56:20.037551   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHKeyPath
	I0108 21:56:20.037703   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:ee:e3", ip: ""} in network minikube-net: {Iface:virbr3 ExpiryTime:2024-01-08 22:52:09 +0000 UTC Type:0 Mac:52:54:00:35:ee:e3 Iaid: IPaddr:192.168.61.187 Prefix:24 Hostname:stopped-upgrade-716145 Clientid:01:52:54:00:35:ee:e3}
	I0108 21:56:20.037728   63205 main.go:141] libmachine: (stopped-upgrade-716145) DBG | domain stopped-upgrade-716145 has defined IP address 192.168.61.187 and MAC address 52:54:00:35:ee:e3 in network minikube-net
	I0108 21:56:20.037739   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHUsername
	I0108 21:56:20.037920   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHPort
	I0108 21:56:20.037929   63205 sshutil.go:53] new ssh client: &{IP:192.168.61.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/stopped-upgrade-716145/id_rsa Username:docker}
	I0108 21:56:20.038057   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHKeyPath
	I0108 21:56:20.038233   63205 main.go:141] libmachine: (stopped-upgrade-716145) Calling .GetSSHUsername
	I0108 21:56:20.038386   63205 sshutil.go:53] new ssh client: &{IP:192.168.61.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/stopped-upgrade-716145/id_rsa Username:docker}
	W0108 21:56:20.152802   63205 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0108 21:56:20.152876   63205 ssh_runner.go:195] Run: systemctl --version
	I0108 21:56:20.161736   63205 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0108 21:56:20.248266   63205 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 21:56:20.254257   63205 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:56:20.254354   63205 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:56:20.260306   63205 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0108 21:56:20.260334   63205 start.go:475] detecting cgroup driver to use...
	I0108 21:56:20.260404   63205 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:56:20.272029   63205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:56:20.282491   63205 docker.go:217] disabling cri-docker service (if available) ...
	I0108 21:56:20.282562   63205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 21:56:20.291713   63205 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 21:56:20.300282   63205 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0108 21:56:20.309489   63205 docker.go:227] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0108 21:56:20.309571   63205 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 21:56:20.428267   63205 docker.go:233] disabling docker service ...
	I0108 21:56:20.428335   63205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 21:56:20.442414   63205 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 21:56:20.450900   63205 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 21:56:20.563848   63205 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 21:56:20.674900   63205 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 21:56:20.684945   63205 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:56:20.698735   63205 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0108 21:56:20.698806   63205 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0108 21:56:20.710457   63205 out.go:177] 
	W0108 21:56:20.712024   63205 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0108 21:56:20.712044   63205 out.go:239] * 
	* 
	W0108 21:56:20.712938   63205 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:56:20.715272   63205 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-716145 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (283.80s)
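The root cause visible in the log above is the pause_image rewrite: the Buildroot guest created by the old v1.6.2 binary has no /etc/crio/crio.conf.d/02-crio.conf drop-in, so the in-place sed exits with status 1 and the new binary aborts with RUNTIME_ENABLE. A minimal sketch of a more defensive version of that step follows; it is an illustration only, assuming the same drop-in path and pause image shown in the log, and is not the actual minikube code or fix.

	# Hypothetical defensive variant of the pause_image update seen in the log (illustration only).
	sudo mkdir -p /etc/crio/crio.conf.d
	# Seed the drop-in with a [crio.image] section if the old guest image does not ship one.
	[ -f /etc/crio/crio.conf.d/02-crio.conf ] || \
	  printf '[crio.image]\npause_image = ""\n' | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
	# The original substitution now has a pause_image line to rewrite.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf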

                                                
                                    

Test pass (232/298)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 55.38
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 42.06
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
17 TestDownloadOnly/v1.29.0-rc.2/json-events 43.38
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.15
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
26 TestBinaryMirror 0.58
27 TestOffline 105.6
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 218.19
34 TestAddons/parallel/Registry 17.9
36 TestAddons/parallel/InspektorGadget 11.13
37 TestAddons/parallel/MetricsServer 7.02
38 TestAddons/parallel/HelmTiller 30.1
40 TestAddons/parallel/CSI 69.95
41 TestAddons/parallel/Headlamp 39.38
42 TestAddons/parallel/CloudSpanner 7.2
43 TestAddons/parallel/LocalPath 59.63
44 TestAddons/parallel/NvidiaDevicePlugin 5.75
45 TestAddons/parallel/Yakd 6.01
48 TestAddons/serial/GCPAuth/Namespaces 0.13
50 TestCertOptions 103.7
51 TestCertExpiration 438.43
53 TestForceSystemdFlag 50.89
54 TestForceSystemdEnv 71.7
56 TestKVMDriverInstallOrUpdate 5.16
60 TestErrorSpam/setup 47.07
61 TestErrorSpam/start 0.4
62 TestErrorSpam/status 0.81
63 TestErrorSpam/pause 1.65
64 TestErrorSpam/unpause 1.77
65 TestErrorSpam/stop 2.27
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 95.19
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 36.77
72 TestFunctional/serial/KubeContext 0.05
73 TestFunctional/serial/KubectlGetPods 0.08
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.4
77 TestFunctional/serial/CacheCmd/cache/add_local 2.31
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
79 TestFunctional/serial/CacheCmd/cache/list 0.06
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.76
82 TestFunctional/serial/CacheCmd/cache/delete 0.12
83 TestFunctional/serial/MinikubeKubectlCmd 0.13
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
85 TestFunctional/serial/ExtraConfig 36.44
86 TestFunctional/serial/ComponentHealth 0.07
87 TestFunctional/serial/LogsCmd 1.68
88 TestFunctional/serial/LogsFileCmd 1.7
89 TestFunctional/serial/InvalidService 4.7
91 TestFunctional/parallel/ConfigCmd 0.43
92 TestFunctional/parallel/DashboardCmd 28.3
93 TestFunctional/parallel/DryRun 0.36
94 TestFunctional/parallel/InternationalLanguage 0.17
95 TestFunctional/parallel/StatusCmd 1.33
99 TestFunctional/parallel/ServiceCmdConnect 8.65
100 TestFunctional/parallel/AddonsCmd 0.15
101 TestFunctional/parallel/PersistentVolumeClaim 51.84
103 TestFunctional/parallel/SSHCmd 0.49
104 TestFunctional/parallel/CpCmd 1.55
105 TestFunctional/parallel/MySQL 41.49
106 TestFunctional/parallel/FileSync 0.29
107 TestFunctional/parallel/CertSync 1.92
111 TestFunctional/parallel/NodeLabels 0.07
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
115 TestFunctional/parallel/License 0.65
116 TestFunctional/parallel/ServiceCmd/DeployApp 13.23
117 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
118 TestFunctional/parallel/MountCmd/any-port 13.2
119 TestFunctional/parallel/ProfileCmd/profile_list 0.36
120 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
124 TestFunctional/parallel/Version/short 0.07
125 TestFunctional/parallel/Version/components 1.05
126 TestFunctional/parallel/ServiceCmd/List 0.53
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
128 TestFunctional/parallel/MountCmd/specific-port 1.79
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
130 TestFunctional/parallel/ServiceCmd/Format 0.36
131 TestFunctional/parallel/ServiceCmd/URL 0.36
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.37
142 TestFunctional/parallel/ImageCommands/ImageListShort 0.37
143 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
144 TestFunctional/parallel/ImageCommands/ImageListJson 0.57
145 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
146 TestFunctional/parallel/ImageCommands/ImageBuild 4.59
147 TestFunctional/parallel/ImageCommands/Setup 2.23
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.95
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.92
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.64
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.81
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.37
155 TestFunctional/delete_addon-resizer_images 0.07
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestIngressAddonLegacy/StartLegacyK8sCluster 123.22
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 18.66
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.61
168 TestJSONOutput/start/Command 99.97
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.75
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.69
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 7.11
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.22
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 103.48
200 TestMountStart/serial/StartWithMountFirst 28.83
201 TestMountStart/serial/VerifyMountFirst 0.42
202 TestMountStart/serial/StartWithMountSecond 26.16
203 TestMountStart/serial/VerifyMountSecond 0.4
204 TestMountStart/serial/DeleteFirst 0.69
205 TestMountStart/serial/VerifyMountPostDelete 0.41
206 TestMountStart/serial/Stop 1.22
207 TestMountStart/serial/RestartStopped 25.66
208 TestMountStart/serial/VerifyMountPostStop 0.4
211 TestMultiNode/serial/FreshStart2Nodes 169.66
212 TestMultiNode/serial/DeployApp2Nodes 6.79
214 TestMultiNode/serial/AddNode 45.56
215 TestMultiNode/serial/MultiNodeLabels 0.07
216 TestMultiNode/serial/ProfileList 0.24
217 TestMultiNode/serial/CopyFile 7.92
218 TestMultiNode/serial/StopNode 3.01
219 TestMultiNode/serial/StartAfterStop 32.63
221 TestMultiNode/serial/DeleteNode 1.6
223 TestMultiNode/serial/RestartMultiNode 438.55
224 TestMultiNode/serial/ValidateNameConflict 52.39
231 TestScheduledStopUnix 118.26
237 TestKubernetesUpgrade 344.94
241 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
249 TestStartStop/group/old-k8s-version/serial/FirstStart 159.18
250 TestNoKubernetes/serial/StartWithK8s 106.02
254 TestNoKubernetes/serial/StartWithStopK8s 7.67
259 TestNetworkPlugins/group/false 4.01
263 TestNoKubernetes/serial/Start 30.42
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
265 TestNoKubernetes/serial/ProfileList 0.78
266 TestNoKubernetes/serial/Stop 1.34
267 TestNoKubernetes/serial/StartNoArgs 44.49
268 TestStartStop/group/old-k8s-version/serial/DeployApp 10.5
269 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.12
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
273 TestPause/serial/Start 97.67
275 TestStartStop/group/no-preload/serial/FirstStart 136.77
277 TestStartStop/group/old-k8s-version/serial/SecondStart 1010.27
279 TestStartStop/group/no-preload/serial/DeployApp 11.3
280 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
283 TestStartStop/group/no-preload/serial/SecondStart 992.82
285 TestStartStop/group/embed-certs/serial/FirstStart 342.22
287 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 379.1
288 TestStartStop/group/embed-certs/serial/DeployApp 11.37
289 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.47
291 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.32
292 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.21
295 TestStartStop/group/embed-certs/serial/SecondStart 618.1
297 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 528.59
303 TestStartStop/group/newest-cni/serial/FirstStart 59.44
304 TestStartStop/group/newest-cni/serial/DeployApp 0
305 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.52
308 TestStartStop/group/newest-cni/serial/SecondStart 360.71
311 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
312 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
313 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
314 TestStartStop/group/newest-cni/serial/Pause 2.92
315 TestStoppedBinaryUpgrade/Setup 1.9
317 TestNetworkPlugins/group/auto/Start 103.22
318 TestNetworkPlugins/group/kindnet/Start 87.49
319 TestNetworkPlugins/group/calico/Start 124.9
320 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
321 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
322 TestNetworkPlugins/group/auto/KubeletFlags 0.25
323 TestNetworkPlugins/group/kindnet/NetCatPod 12.36
324 TestNetworkPlugins/group/auto/NetCatPod 12.35
325 TestNetworkPlugins/group/kindnet/DNS 0.23
326 TestNetworkPlugins/group/auto/DNS 0.25
327 TestNetworkPlugins/group/kindnet/Localhost 0.19
328 TestNetworkPlugins/group/auto/Localhost 0.2
329 TestNetworkPlugins/group/kindnet/HairPin 0.19
330 TestNetworkPlugins/group/auto/HairPin 0.18
331 TestNetworkPlugins/group/custom-flannel/Start 85.17
332 TestNetworkPlugins/group/enable-default-cni/Start 128.51
333 TestNetworkPlugins/group/calico/ControllerPod 6.01
334 TestNetworkPlugins/group/calico/KubeletFlags 0.23
335 TestNetworkPlugins/group/calico/NetCatPod 13.33
336 TestNetworkPlugins/group/calico/DNS 0.31
337 TestNetworkPlugins/group/calico/Localhost 0.3
338 TestNetworkPlugins/group/calico/HairPin 0.21
339 TestNetworkPlugins/group/flannel/Start 133.6
340 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
341 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
342 TestNetworkPlugins/group/custom-flannel/DNS 0.21
343 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
344 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
345 TestStoppedBinaryUpgrade/MinikubeLogs 0.49
346 TestNetworkPlugins/group/bridge/Start 128.9
347 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
348 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.32
349 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
350 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
351 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
352 TestNetworkPlugins/group/flannel/ControllerPod 6.01
353 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
354 TestNetworkPlugins/group/flannel/NetCatPod 11.25
355 TestNetworkPlugins/group/flannel/DNS 0.19
356 TestNetworkPlugins/group/flannel/Localhost 0.16
357 TestNetworkPlugins/group/flannel/HairPin 0.15
358 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
359 TestNetworkPlugins/group/bridge/NetCatPod 11.25
360 TestNetworkPlugins/group/bridge/DNS 0.17
361 TestNetworkPlugins/group/bridge/Localhost 0.14
362 TestNetworkPlugins/group/bridge/HairPin 0.15

TestDownloadOnly/v1.16.0/json-events (55.38s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-761857 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-761857 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (55.379307894s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (55.38s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-761857
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-761857: exit status 85 (79.170124ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-761857 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-761857        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:09:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:09:36.177498   17908 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:09:36.177735   17908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:36.177744   17908 out.go:309] Setting ErrFile to fd 2...
	I0108 20:09:36.177749   17908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:09:36.177943   17908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	W0108 20:09:36.178047   17908 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17907-10702/.minikube/config/config.json: open /home/jenkins/minikube-integration/17907-10702/.minikube/config/config.json: no such file or directory
	I0108 20:09:36.178625   17908 out.go:303] Setting JSON to true
	I0108 20:09:36.179445   17908 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3100,"bootTime":1704741476,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:09:36.179508   17908 start.go:138] virtualization: kvm guest
	I0108 20:09:36.182211   17908 out.go:97] [download-only-761857] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:09:36.184226   17908 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:09:36.182319   17908 notify.go:220] Checking for updates...
	W0108 20:09:36.182323   17908 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 20:09:36.187643   17908 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:09:36.189539   17908 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:09:36.191174   17908 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:09:36.192681   17908 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 20:09:36.195351   17908 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:09:36.195570   17908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:09:36.693585   17908 out.go:97] Using the kvm2 driver based on user configuration
	I0108 20:09:36.693612   17908 start.go:298] selected driver: kvm2
	I0108 20:09:36.693619   17908 start.go:902] validating driver "kvm2" against <nil>
	I0108 20:09:36.693923   17908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:09:36.694040   17908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 20:09:36.708806   17908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 20:09:36.708865   17908 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 20:09:36.709334   17908 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0108 20:09:36.709502   17908 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 20:09:36.709558   17908 cni.go:84] Creating CNI manager for ""
	I0108 20:09:36.709573   17908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 20:09:36.709582   17908 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 20:09:36.709590   17908 start_flags.go:323] config:
	{Name:download-only-761857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-761857 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:09:36.709779   17908 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:09:36.711945   17908 out.go:97] Downloading VM boot image ...
	I0108 20:09:36.711981   17908 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 20:09:46.984584   17908 out.go:97] Starting control plane node download-only-761857 in cluster download-only-761857
	I0108 20:09:46.984608   17908 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 20:09:47.091166   17908 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0108 20:09:47.091213   17908 cache.go:56] Caching tarball of preloaded images
	I0108 20:09:47.091364   17908 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 20:09:47.093449   17908 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 20:09:47.093470   17908 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:09:47.209976   17908 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0108 20:10:04.562700   17908 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:10:04.562809   17908 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:10:05.471396   17908 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0108 20:10:05.471781   17908 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/download-only-761857/config.json ...
	I0108 20:10:05.471818   17908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/download-only-761857/config.json: {Name:mk4bf93e35fb8813c58def0fb7cd22fde9258c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 20:10:05.472000   17908 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0108 20:10:05.472220   17908 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-761857"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (42.06s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-761857 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-761857 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (42.054703095s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (42.06s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-761857
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-761857: exit status 85 (75.787383ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-761857 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-761857        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-761857 | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |          |
	|         | -p download-only-761857        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:10:31
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:10:31.638618   18097 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:10:31.638777   18097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:10:31.638788   18097 out.go:309] Setting ErrFile to fd 2...
	I0108 20:10:31.638795   18097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:10:31.638997   18097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	W0108 20:10:31.639116   18097 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17907-10702/.minikube/config/config.json: open /home/jenkins/minikube-integration/17907-10702/.minikube/config/config.json: no such file or directory
	I0108 20:10:31.639551   18097 out.go:303] Setting JSON to true
	I0108 20:10:31.640369   18097 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3156,"bootTime":1704741476,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:10:31.640434   18097 start.go:138] virtualization: kvm guest
	I0108 20:10:31.643224   18097 out.go:97] [download-only-761857] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:10:31.645362   18097 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:10:31.643456   18097 notify.go:220] Checking for updates...
	I0108 20:10:31.649045   18097 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:10:31.650750   18097 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:10:31.652470   18097 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:10:31.654147   18097 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 20:10:31.657677   18097 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:10:31.658150   18097 config.go:182] Loaded profile config "download-only-761857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0108 20:10:31.658200   18097 start.go:810] api.Load failed for download-only-761857: filestore "download-only-761857": Docker machine "download-only-761857" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:10:31.658275   18097 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 20:10:31.658311   18097 start.go:810] api.Load failed for download-only-761857: filestore "download-only-761857": Docker machine "download-only-761857" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:10:31.693001   18097 out.go:97] Using the kvm2 driver based on existing profile
	I0108 20:10:31.693035   18097 start.go:298] selected driver: kvm2
	I0108 20:10:31.693041   18097 start.go:902] validating driver "kvm2" against &{Name:download-only-761857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-761857 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:10:31.693423   18097 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:10:31.693504   18097 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 20:10:31.708431   18097 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 20:10:31.709189   18097 cni.go:84] Creating CNI manager for ""
	I0108 20:10:31.709207   18097 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 20:10:31.709219   18097 start_flags.go:323] config:
	{Name:download-only-761857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-761857 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:10:31.709370   18097 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:10:31.711349   18097 out.go:97] Starting control plane node download-only-761857 in cluster download-only-761857
	I0108 20:10:31.711363   18097 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:10:32.209776   18097 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 20:10:32.209813   18097 cache.go:56] Caching tarball of preloaded images
	I0108 20:10:32.209972   18097 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:10:32.212433   18097 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0108 20:10:32.212462   18097 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:10:32.326582   18097 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0108 20:10:48.139264   18097 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:10:48.139370   18097 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:10:49.081171   18097 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0108 20:10:49.081314   18097 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/download-only-761857/config.json ...
	I0108 20:10:49.081552   18097 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0108 20:10:49.081765   18097 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-761857"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (43.38s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-761857 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-761857 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (43.376201211s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (43.38s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-761857
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-761857: exit status 85 (76.400078ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-761857 | jenkins | v1.32.0 | 08 Jan 24 20:09 UTC |          |
	|         | -p download-only-761857           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-761857 | jenkins | v1.32.0 | 08 Jan 24 20:10 UTC |          |
	|         | -p download-only-761857           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-761857 | jenkins | v1.32.0 | 08 Jan 24 20:11 UTC |          |
	|         | -p download-only-761857           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:11:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:11:13.774104   18221 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:11:13.774348   18221 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:11:13.774356   18221 out.go:309] Setting ErrFile to fd 2...
	I0108 20:11:13.774360   18221 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:11:13.774519   18221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	W0108 20:11:13.774622   18221 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17907-10702/.minikube/config/config.json: open /home/jenkins/minikube-integration/17907-10702/.minikube/config/config.json: no such file or directory
	I0108 20:11:13.775016   18221 out.go:303] Setting JSON to true
	I0108 20:11:13.775817   18221 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3198,"bootTime":1704741476,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:11:13.775878   18221 start.go:138] virtualization: kvm guest
	I0108 20:11:13.778105   18221 out.go:97] [download-only-761857] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:11:13.779665   18221 out.go:169] MINIKUBE_LOCATION=17907
	I0108 20:11:13.778277   18221 notify.go:220] Checking for updates...
	I0108 20:11:13.782621   18221 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:11:13.784199   18221 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:11:13.785780   18221 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:11:13.787317   18221 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 20:11:13.790068   18221 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:11:13.790535   18221 config.go:182] Loaded profile config "download-only-761857": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W0108 20:11:13.790589   18221 start.go:810] api.Load failed for download-only-761857: filestore "download-only-761857": Docker machine "download-only-761857" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:11:13.790675   18221 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 20:11:13.790713   18221 start.go:810] api.Load failed for download-only-761857: filestore "download-only-761857": Docker machine "download-only-761857" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:11:13.822701   18221 out.go:97] Using the kvm2 driver based on existing profile
	I0108 20:11:13.822737   18221 start.go:298] selected driver: kvm2
	I0108 20:11:13.822743   18221 start.go:902] validating driver "kvm2" against &{Name:download-only-761857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:download-only-761857 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:11:13.823194   18221 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:11:13.823269   18221 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17907-10702/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 20:11:13.837635   18221 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 20:11:13.838418   18221 cni.go:84] Creating CNI manager for ""
	I0108 20:11:13.838436   18221 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0108 20:11:13.838449   18221 start_flags.go:323] config:
	{Name:download-only-761857 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-761857 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:11:13.838612   18221 iso.go:125] acquiring lock: {Name:mkee485140f2a2ab6b7a0bb876055a3814a537d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:11:13.840699   18221 out.go:97] Starting control plane node download-only-761857 in cluster download-only-761857
	I0108 20:11:13.840718   18221 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 20:11:14.027859   18221 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 20:11:14.027894   18221 cache.go:56] Caching tarball of preloaded images
	I0108 20:11:14.028157   18221 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 20:11:14.030592   18221 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0108 20:11:14.030616   18221 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:11:14.141026   18221 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:2e182f4d7475b49e22eaf15ea22c281b -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0108 20:11:26.618035   18221 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:11:26.618124   18221 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17907-10702/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0108 20:11:27.436796   18221 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0108 20:11:27.436921   18221 profile.go:148] Saving config to /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/download-only-761857/config.json ...
	I0108 20:11:27.437131   18221 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0108 20:11:27.437316   18221 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17907-10702/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-761857"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-761857
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-115177 --alsologtostderr --binary-mirror http://127.0.0.1:42657 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-115177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-115177
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
TestOffline (105.6s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-588634 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-588634 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m44.463502245s)
helpers_test.go:175: Cleaning up "offline-crio-588634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-588634
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-588634: (1.139223576s)
--- PASS: TestOffline (105.60s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-117367
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-117367: exit status 85 (65.93982ms)

                                                
                                                
-- stdout --
	* Profile "addons-117367" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-117367"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-117367
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-117367: exit status 85 (64.812114ms)

                                                
                                                
-- stdout --
	* Profile "addons-117367" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-117367"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (218.19s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-117367 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-117367 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m38.18911229s)
--- PASS: TestAddons/Setup (218.19s)

                                                
                                    
TestAddons/parallel/Registry (17.9s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 31.467842ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9k4wl" [82d27468-3946-478f-825b-521282fc7a92] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005056963s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-q8br6" [6409afa0-82bf-4dc2-b033-0803a7132987] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010197246s
addons_test.go:340: (dbg) Run:  kubectl --context addons-117367 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-117367 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-117367 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.839196517s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-117367 ip
2024/01/08 20:15:53 [DEBUG] GET http://192.168.39.205:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-117367 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.90s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.13s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gdl9d" [baf47de0-320f-403a-a567-f3d0c97241cf] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005117854s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-117367
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-117367: (6.125865134s)
--- PASS: TestAddons/parallel/InspektorGadget (11.13s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.02s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 11.909891ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-8fbhz" [e9216c24-02bb-430f-9649-eaaf8f8b8782] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.011975968s
addons_test.go:415: (dbg) Run:  kubectl --context addons-117367 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-117367 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.02s)

                                                
                                    
TestAddons/parallel/HelmTiller (30.1s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.135268ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-j2j8k" [78b46de8-f390-41b9-ade6-b1ad3f35307f] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007565206s
addons_test.go:473: (dbg) Run:  kubectl --context addons-117367 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-117367 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (24.402180331s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-117367 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (30.10s)
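
The Tiller check is a plain `helm version` from the Helm 2 client image against the in-cluster tiller-deploy; a rough manual equivalent on the same profile:

# Ask Tiller for its version from a one-shot Helm 2 client pod in kube-system.
kubectl --context addons-117367 run --rm helm-test --restart=Never \
  --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version

# Disable the addon again afterwards.
minikube -p addons-117367 addons disable helm-tiller --alsologtostderr -v=1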

                                                
                                    
TestAddons/parallel/CSI (69.95s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 32.762966ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-117367 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-117367 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4242ea64-4c28-4436-ac98-a5d3373b6443] Pending
helpers_test.go:344: "task-pv-pod" [4242ea64-4c28-4436-ac98-a5d3373b6443] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4242ea64-4c28-4436-ac98-a5d3373b6443] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.004416076s
addons_test.go:584: (dbg) Run:  kubectl --context addons-117367 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-117367 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-117367 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-117367 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-117367 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-117367 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-117367 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [784b0897-356d-4dbd-a2f5-1f25d04e1e4c] Pending
helpers_test.go:344: "task-pv-pod-restore" [784b0897-356d-4dbd-a2f5-1f25d04e1e4c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [784b0897-356d-4dbd-a2f5-1f25d04e1e4c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.00819353s
addons_test.go:626: (dbg) Run:  kubectl --context addons-117367 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-117367 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-117367 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-117367 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-117367 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.399031409s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-117367 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (69.95s)
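
The CSI run above walks through a provision, snapshot, and restore cycle using the testdata manifests from the minikube tree; condensed, and with `kubectl wait` standing in for the test's PVC/pod polling loops, the same flow looks roughly like:

# Provision a PVC and a pod that mounts it, then wait for the pod to come up.
kubectl --context addons-117367 create -f testdata/csi-hostpath-driver/pvc.yaml
kubectl --context addons-117367 create -f testdata/csi-hostpath-driver/pv-pod.yaml
kubectl --context addons-117367 wait pod task-pv-pod --for=condition=Ready --timeout=6m

# Snapshot the volume, drop the original pod and PVC, and restore from the snapshot.
kubectl --context addons-117367 create -f testdata/csi-hostpath-driver/snapshot.yaml
kubectl --context addons-117367 delete pod task-pv-pod
kubectl --context addons-117367 delete pvc hpvc
kubectl --context addons-117367 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
kubectl --context addons-117367 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
kubectl --context addons-117367 wait pod task-pv-pod-restore --for=condition=Ready --timeout=6m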

                                                
                                    
TestAddons/parallel/Headlamp (39.38s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-117367 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-117367 --alsologtostderr -v=1: (3.375140466s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-8zh4p" [fc6d0204-105e-4658-9084-c38094972eb7] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-8zh4p" [fc6d0204-105e-4658-9084-c38094972eb7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-8zh4p" [fc6d0204-105e-4658-9084-c38094972eb7] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 36.004394341s
--- PASS: TestAddons/parallel/Headlamp (39.38s)

                                                
                                    
TestAddons/parallel/CloudSpanner (7.2s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-dhs8z" [e564071b-5e08-4d7d-b11e-9b284e7dffd6] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.01922832s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-117367
addons_test.go:860: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-117367: (1.164664103s)
--- PASS: TestAddons/parallel/CloudSpanner (7.20s)

                                                
                                    
TestAddons/parallel/LocalPath (59.63s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-117367 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-117367 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-117367 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7b716a14-153e-43fb-9dd3-2f2ac89a1a3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7b716a14-153e-43fb-9dd3-2f2ac89a1a3e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7b716a14-153e-43fb-9dd3-2f2ac89a1a3e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.00565279s
addons_test.go:891: (dbg) Run:  kubectl --context addons-117367 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-117367 ssh "cat /opt/local-path-provisioner/pvc-c8a6c247-5d06-4b89-8f77-d084297eda51_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-117367 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-117367 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-117367 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-117367 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.6933567s)
--- PASS: TestAddons/parallel/LocalPath (59.63s)
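
The local-path flow is the hostPath analogue of the CSI test: the rancher local-path provisioner binds the PVC, the busybox pod from pod.yaml is expected to write file1 into the provisioned directory, and the test reads it back over SSH. A manual sketch (the ls step is only an easy way to locate the pvc-<uid>_default_test-pvc directory for the current run):

kubectl --context addons-117367 apply -f testdata/storage-provisioner-rancher/pvc.yaml
kubectl --context addons-117367 apply -f testdata/storage-provisioner-rancher/pod.yaml

# Once the pod has completed, the data lives under /opt/local-path-provisioner on the node.
minikube -p addons-117367 ssh "ls /opt/local-path-provisioner/"
minikube -p addons-117367 ssh "cat /opt/local-path-provisioner/pvc-c8a6c247-5d06-4b89-8f77-d084297eda51_default_test-pvc/file1"

# Clean up.
kubectl --context addons-117367 delete pod test-local-path
kubectl --context addons-117367 delete pvc test-pvc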

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.75s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4czzg" [a6533da4-4d13-468b-9ddd-3aa8940ce37b] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.007777336s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-117367
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.75s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-f94t4" [2775147b-f7b1-4b1f-9010-63889a274022] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00542482s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-117367 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-117367 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestCertOptions (103.7s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-686681 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0108 21:15:19.481568   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 21:15:36.429427   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-686681 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m42.192097654s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-686681 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-686681 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-686681 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-686681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-686681
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-686681: (1.031942662s)
--- PASS: TestCertOptions (103.70s)
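
TestCertOptions asserts that the extra --apiserver-ips/--apiserver-names SANs and the custom --apiserver-port end up in the serving certificate and kubeconfigs; the underlying inspection commands are easy to rerun by hand (the grep filters are illustrative, not part of the test):

# SAN list of the API server certificate inside the VM.
minikube -p cert-options-686681 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"

# Both the client kubeconfig and the in-VM admin.conf should point at port 8555.
kubectl --context cert-options-686681 config view | grep server:
minikube ssh -p cert-options-686681 -- "sudo cat /etc/kubernetes/admin.conf" | grep server: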

                                                
                                    
TestCertExpiration (438.43s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-001550 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-001550 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m17.330806279s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-001550 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0108 21:19:26.820146   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 21:20:36.429169   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 21:20:47.564461   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 21:21:04.516745   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-001550 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (3m0.040952087s)
helpers_test.go:175: Cleaning up "cert-expiration-001550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-001550
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-001550: (1.053068384s)
--- PASS: TestCertExpiration (438.43s)

                                                
                                    
TestForceSystemdFlag (50.89s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-162170 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-162170 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (49.677777499s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-162170 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-162170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-162170
--- PASS: TestForceSystemdFlag (50.89s)
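
The --force-systemd check only needs to confirm that CRI-O was configured with the systemd cgroup manager; a minimal manual version of the same assertion (the grep is illustrative and is expected to show cgroup_manager = "systemd"):

minikube start -p force-systemd-flag-162170 --memory=2048 --force-systemd \
  --driver=kvm2 --container-runtime=crio
minikube -p force-systemd-flag-162170 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
minikube delete -p force-systemd-flag-162170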

                                                
                                    
TestForceSystemdEnv (71.7s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-467534 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-467534 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.629366295s)
helpers_test.go:175: Cleaning up "force-systemd-env-467534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-467534
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-467534: (2.069512607s)
--- PASS: TestForceSystemdEnv (71.70s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.16s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.16s)

                                                
                                    
TestErrorSpam/setup (47.07s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-412923 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-412923 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-412923 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-412923 --driver=kvm2  --container-runtime=crio: (47.068983367s)
--- PASS: TestErrorSpam/setup (47.07s)

                                                
                                    
TestErrorSpam/start (0.4s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

                                                
                                    
TestErrorSpam/status (0.81s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 status
--- PASS: TestErrorSpam/status (0.81s)

                                                
                                    
TestErrorSpam/pause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 pause
--- PASS: TestErrorSpam/pause (1.65s)

                                                
                                    
TestErrorSpam/unpause (1.77s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 unpause
--- PASS: TestErrorSpam/unpause (1.77s)

                                                
                                    
TestErrorSpam/stop (2.27s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 stop: (2.094564157s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-412923 --log_dir /tmp/nospam-412923 stop
--- PASS: TestErrorSpam/stop (2.27s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17907-10702/.minikube/files/etc/test/nested/copy/17896/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (95.19s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-776422 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-776422 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m35.1879146s)
--- PASS: TestFunctional/serial/StartWithProxy (95.19s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (36.77s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-776422 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-776422 --alsologtostderr -v=8: (36.771895121s)
functional_test.go:659: soft start took 36.77259291s for "functional-776422" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.77s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-776422 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.4s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 cache add registry.k8s.io/pause:3.1: (1.129946731s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 cache add registry.k8s.io/pause:3.3: (1.12527278s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 cache add registry.k8s.io/pause:latest: (1.140278146s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.40s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-776422 /tmp/TestFunctionalserialCacheCmdcacheadd_local3021551443/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 cache add minikube-local-cache-test:functional-776422
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 cache add minikube-local-cache-test:functional-776422: (1.941813454s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 cache delete minikube-local-cache-test:functional-776422
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-776422
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776422 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (245.312579ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)
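
The cache_reload sequence above is the standard add, remove-in-node, reload round trip; by hand, against the same profile:

# Pre-load the image into minikube's cache and into the node, then remove it from CRI-O.
minikube -p functional-776422 cache add registry.k8s.io/pause:latest
minikube -p functional-776422 ssh sudo crictl rmi registry.k8s.io/pause:latest

# inspecti now fails (exit 1) because the image is gone from the node...
minikube -p functional-776422 ssh sudo crictl inspecti registry.k8s.io/pause:latest

# ...until `cache reload` pushes the cached image back in.
minikube -p functional-776422 cache reload
minikube -p functional-776422 ssh sudo crictl inspecti registry.k8s.io/pause:latest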

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 kubectl -- --context functional-776422 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-776422 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.44s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-776422 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0108 20:25:36.430190   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:25:36.435851   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:25:36.446217   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:25:36.466573   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:25:36.506782   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:25:36.587203   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:25:36.747716   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:25:37.068385   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:25:37.709294   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:25:38.989609   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:25:41.550470   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:25:46.670774   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-776422 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.440280909s)
functional_test.go:757: restart took 36.440393045s for "functional-776422" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.44s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-776422 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 logs
E0108 20:25:56.910909   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 logs: (1.681671324s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 logs --file /tmp/TestFunctionalserialLogsFileCmd4145809806/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 logs --file /tmp/TestFunctionalserialLogsFileCmd4145809806/001/logs.txt: (1.700839214s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.70s)

                                                
                                    
TestFunctional/serial/InvalidService (4.7s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-776422 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-776422
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-776422: exit status 115 (323.760808ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.91:32044 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-776422 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-776422 delete -f testdata/invalidsvc.yaml: (1.16377984s)
--- PASS: TestFunctional/serial/InvalidService (4.70s)
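
The InvalidService case is a deliberate negative test: the Service in testdata/invalidsvc.yaml has no running backing pod, so `minikube service` is expected to fail with exit status 115 (SVC_UNREACHABLE) instead of printing a usable URL. Reproducing it is just:

kubectl --context functional-776422 apply -f testdata/invalidsvc.yaml
minikube service invalid-svc -p functional-776422   # expected: exit status 115, SVC_UNREACHABLE
kubectl --context functional-776422 delete -f testdata/invalidsvc.yaml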

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776422 config get cpus: exit status 14 (82.419544ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776422 config get cpus: exit status 14 (62.217128ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (28.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-776422 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-776422 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 25593: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.30s)

                                                
                                    
TestFunctional/parallel/DryRun (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-776422 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-776422 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (187.192449ms)

                                                
                                                
-- stdout --
	* [functional-776422] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:26:06.092984   25093 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:26:06.093321   25093 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:26:06.093336   25093 out.go:309] Setting ErrFile to fd 2...
	I0108 20:26:06.093344   25093 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:26:06.093619   25093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 20:26:06.094355   25093 out.go:303] Setting JSON to false
	I0108 20:26:06.095621   25093 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4090,"bootTime":1704741476,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:26:06.095717   25093 start.go:138] virtualization: kvm guest
	I0108 20:26:06.098287   25093 out.go:177] * [functional-776422] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:26:06.100427   25093 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:26:06.100392   25093 notify.go:220] Checking for updates...
	I0108 20:26:06.101990   25093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:26:06.103761   25093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:26:06.105323   25093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:26:06.106738   25093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:26:06.108170   25093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:26:06.110098   25093 config.go:182] Loaded profile config "functional-776422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:26:06.110893   25093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:26:06.111014   25093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:26:06.129306   25093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33669
	I0108 20:26:06.129927   25093 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:26:06.130609   25093 main.go:141] libmachine: Using API Version  1
	I0108 20:26:06.130639   25093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:26:06.131006   25093 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:26:06.131160   25093 main.go:141] libmachine: (functional-776422) Calling .DriverName
	I0108 20:26:06.131343   25093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:26:06.131620   25093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:26:06.131657   25093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:26:06.147935   25093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36207
	I0108 20:26:06.148398   25093 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:26:06.148872   25093 main.go:141] libmachine: Using API Version  1
	I0108 20:26:06.148890   25093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:26:06.149206   25093 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:26:06.149363   25093 main.go:141] libmachine: (functional-776422) Calling .DriverName
	I0108 20:26:06.193625   25093 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 20:26:06.195632   25093 start.go:298] selected driver: kvm2
	I0108 20:26:06.195653   25093 start.go:902] validating driver "kvm2" against &{Name:functional-776422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-776422 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.91 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:26:06.195805   25093 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:26:06.198475   25093 out.go:177] 
	W0108 20:26:06.199942   25093 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 20:26:06.201245   25093 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-776422 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-776422 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-776422 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (171.260267ms)

                                                
                                                
-- stdout --
	* [functional-776422] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 20:26:05.911228   25033 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:26:05.911364   25033 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:26:05.911373   25033 out.go:309] Setting ErrFile to fd 2...
	I0108 20:26:05.911378   25033 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:26:05.911670   25033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 20:26:05.912261   25033 out.go:303] Setting JSON to false
	I0108 20:26:05.913176   25033 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4090,"bootTime":1704741476,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:26:05.913239   25033 start.go:138] virtualization: kvm guest
	I0108 20:26:05.916035   25033 out.go:177] * [functional-776422] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0108 20:26:05.917953   25033 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 20:26:05.917973   25033 notify.go:220] Checking for updates...
	I0108 20:26:05.919812   25033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:26:05.921596   25033 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 20:26:05.923053   25033 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 20:26:05.924498   25033 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:26:05.925848   25033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:26:05.927635   25033 config.go:182] Loaded profile config "functional-776422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:26:05.928278   25033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:26:05.928358   25033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:26:05.946293   25033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
	I0108 20:26:05.946687   25033 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:26:05.947284   25033 main.go:141] libmachine: Using API Version  1
	I0108 20:26:05.947307   25033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:26:05.947647   25033 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:26:05.947794   25033 main.go:141] libmachine: (functional-776422) Calling .DriverName
	I0108 20:26:05.948021   25033 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:26:05.948573   25033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:26:05.948738   25033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:26:05.966756   25033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0108 20:26:05.967194   25033 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:26:05.967656   25033 main.go:141] libmachine: Using API Version  1
	I0108 20:26:05.967684   25033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:26:05.967994   25033 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:26:05.968209   25033 main.go:141] libmachine: (functional-776422) Calling .DriverName
	I0108 20:26:06.003243   25033 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0108 20:26:06.005025   25033 start.go:298] selected driver: kvm2
	I0108 20:26:06.005053   25033 start.go:902] validating driver "kvm2" against &{Name:functional-776422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703498848-17857@sha256:81ae12a49915e4f02aa382dd3758a30a6649e1143c32b3d03309750104577c6c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-776422 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.91 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 20:26:06.005150   25033 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:26:06.008072   25033 out.go:177] 
	W0108 20:26:06.011181   25033 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 20:26:06.013434   25033 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
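
The exit status 23 above is the expected outcome: the dry run is started with --memory 250MB in a French locale, so minikube refuses with the localized RSRC_INSUFFICIENT_REQ_MEMORY message (in English: the requested 250MiB allocation is below the usable minimum of 1800MB). A minimal sketch of reproducing this by hand, assuming minikube picks the language up from the standard LC_ALL/LANG environment variables and that the functional-776422 profile already exists:

  # Force a French locale for the CLI output (assumption: minikube honours LC_ALL/LANG).
  export LC_ALL=fr_FR.UTF-8
  # Ask for less memory than the 1800MB minimum so the dry run fails fast with exit status 23.
  out/minikube-linux-amd64 start -p functional-776422 --dry-run --memory 250MB \
    --alsologtostderr --driver=kvm2 --container-runtime=crio
  echo "exit status: $?"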

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-776422 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-776422 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-ctdsv" [fd431bf3-82c4-4037-87ac-374c152671a5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-ctdsv" [fd431bf3-82c4-4037-87ac-374c152671a5] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.056589043s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.91:32290
functional_test.go:1674: http://192.168.50.91:32290: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-ctdsv

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.91:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.91:32290
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.65s)
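
For reference, the ServiceCmdConnect flow is: create a deployment, expose it as a NodePort service, ask minikube for the node URL, and curl it. A sketch of the same sequence run by hand, using the profile, deployment name and image from the log above:

  # Create and expose the test deployment.
  kubectl --context functional-776422 create deployment hello-node-connect \
    --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-776422 expose deployment hello-node-connect \
    --type=NodePort --port=8080
  # Wait for the backing pod to become Ready.
  kubectl --context functional-776422 wait --for=condition=ready pod \
    -l app=hello-node-connect --timeout=120s
  # Resolve the NodePort URL and hit it; echoserver reflects the request back as shown above.
  URL=$(out/minikube-linux-amd64 -p functional-776422 service hello-node-connect --url)
  curl -s "$URL"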

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (51.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0e7f1aa4-72e7-4e8a-abfb-f2e4e7ff60a2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00597596s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-776422 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-776422 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-776422 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-776422 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-776422 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e105c46d-4b5d-4f9a-a8db-d3b9b244d51c] Pending
helpers_test.go:344: "sp-pod" [e105c46d-4b5d-4f9a-a8db-d3b9b244d51c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e105c46d-4b5d-4f9a-a8db-d3b9b244d51c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.004784779s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-776422 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-776422 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-776422 delete -f testdata/storage-provisioner/pod.yaml: (4.512705283s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-776422 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f9ba079f-6491-459a-9e95-6e8d2fad35b2] Pending
helpers_test.go:344: "sp-pod" [f9ba079f-6491-459a-9e95-6e8d2fad35b2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f9ba079f-6491-459a-9e95-6e8d2fad35b2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.005691278s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-776422 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.84s)
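
The PersistentVolumeClaim test is a persistence check across pod restarts: claim a volume, write a file from one pod, delete that pod, then confirm a fresh pod still sees the file. The manifests come from the repo's testdata/storage-provisioner directory and are not reproduced in this report; a hand-run sketch under that assumption:

  # Apply the claim and the first consumer pod (manifests from the test's testdata).
  kubectl --context functional-776422 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-776422 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-776422 wait --for=condition=ready pod sp-pod --timeout=180s
  # Write a marker onto the mounted volume, then delete the pod.
  kubectl --context functional-776422 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-776422 delete -f testdata/storage-provisioner/pod.yaml
  # Recreate the pod and confirm the file survived the restart.
  kubectl --context functional-776422 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-776422 wait --for=condition=ready pod sp-pod --timeout=180s
  kubectl --context functional-776422 exec sp-pod -- ls /tmp/mount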

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh -n functional-776422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 cp functional-776422:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1090685918/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh -n functional-776422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh -n functional-776422 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.55s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (41.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-776422 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-vs9q4" [a219f8c1-5910-4c2c-a15b-95005cafb601] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-vs9q4" [a219f8c1-5910-4c2c-a15b-95005cafb601] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 38.007537442s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-776422 exec mysql-859648c796-vs9q4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-776422 exec mysql-859648c796-vs9q4 -- mysql -ppassword -e "show databases;": exit status 1 (393.412544ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-776422 exec mysql-859648c796-vs9q4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-776422 exec mysql-859648c796-vs9q4 -- mysql -ppassword -e "show databases;": exit status 1 (157.013155ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-776422 exec mysql-859648c796-vs9q4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (41.49s)
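
The two non-zero exits above are ordinary mysqld warm-up races: first the root password is rejected while initialization is still running (ERROR 1045), then the server socket is not yet listening (ERROR 2002), and the test simply retries until the query succeeds. A small retry loop that does the same thing by hand, with the pod name taken from the log (yours will differ):

  # Retry "show databases;" until mysqld inside the pod accepts the connection.
  POD=mysql-859648c796-vs9q4
  for i in $(seq 1 30); do
    if kubectl --context functional-776422 exec "$POD" -- \
         mysql -ppassword -e "show databases;"; then
      break
    fi
    sleep 5
  done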

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/17896/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "sudo cat /etc/test/nested/copy/17896/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/17896.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "sudo cat /etc/ssl/certs/17896.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/17896.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "sudo cat /usr/share/ca-certificates/17896.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/178962.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "sudo cat /etc/ssl/certs/178962.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/178962.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "sudo cat /usr/share/ca-certificates/178962.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.92s)
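
CertSync checks that a user-supplied certificate shows up inside the VM both under its own name (17896.pem / 178962.pem) and under an OpenSSL subject-hash name (51391683.0 / 3ec20f2e.0), which is how system trust stores index CA files. A sketch of verifying that pairing by hand; the assumption is that the hash openssl prints for the pem matches the .0 filename the test checks:

  # Pull the cert out of the VM and compute its OpenSSL subject hash.
  out/minikube-linux-amd64 -p functional-776422 ssh "sudo cat /etc/ssl/certs/17896.pem" > /tmp/17896.pem
  openssl x509 -noout -hash -in /tmp/17896.pem    # expected to print 51391683 for this cert (assumption)
  # The hash-named copy should hold the same certificate.
  out/minikube-linux-amd64 -p functional-776422 ssh "sudo cat /etc/ssl/certs/51391683.0"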

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-776422 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776422 ssh "sudo systemctl is-active docker": exit status 1 (262.025322ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776422 ssh "sudo systemctl is-active containerd": exit status 1 (309.70504ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
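
The two failures above are the point of the test: on a cri-o profile, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit non-zero (status 3 on the remote side, surfaced as exit status 1 by minikube ssh). Checking all three runtimes by hand:

  # Only crio should report "active" on this profile; the other two should be inactive.
  out/minikube-linux-amd64 -p functional-776422 ssh "sudo systemctl is-active docker" || true
  out/minikube-linux-amd64 -p functional-776422 ssh "sudo systemctl is-active containerd" || true
  out/minikube-linux-amd64 -p functional-776422 ssh "sudo systemctl is-active crio"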

                                                
                                    
x
+
TestFunctional/parallel/License (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (13.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-776422 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-776422 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-4hjq8" [a81f32f1-ab20-4968-acbc-511006f55107] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-4hjq8" [a81f32f1-ab20-4968-acbc-511006f55107] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.00481848s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (13.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-776422 /tmp/TestFunctionalparallelMountCmdany-port887807377/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704745564942520079" to /tmp/TestFunctionalparallelMountCmdany-port887807377/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704745564942520079" to /tmp/TestFunctionalparallelMountCmdany-port887807377/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704745564942520079" to /tmp/TestFunctionalparallelMountCmdany-port887807377/001/test-1704745564942520079
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (250.417629ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 20:26 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 20:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 20:26 test-1704745564942520079
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh cat /mount-9p/test-1704745564942520079
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-776422 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e3ecf3ff-bc99-486f-bd35-bb0901c31d5a] Pending
helpers_test.go:344: "busybox-mount" [e3ecf3ff-bc99-486f-bd35-bb0901c31d5a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e3ecf3ff-bc99-486f-bd35-bb0901c31d5a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e3ecf3ff-bc99-486f-bd35-bb0901c31d5a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.004458451s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-776422 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh stat /mount-9p/created-by-pod
E0108 20:26:17.391036   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-776422 /tmp/TestFunctionalparallelMountCmdany-port887807377/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.20s)
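
The first failed findmnt above is only a startup race: the 9p mount daemon had not finished mounting when the check ran, and the test retries a moment later. A sketch of the same mount-and-verify flow run by hand, with /tmp/mount-demo standing in for the test's temporary host directory:

  # Start a 9p mount from the host into the guest; it runs in the foreground, so background it here.
  mkdir -p /tmp/mount-demo
  out/minikube-linux-amd64 mount -p functional-776422 /tmp/mount-demo:/mount-9p &
  MOUNT_PID=$!
  sleep 3
  # Confirm the guest sees a 9p filesystem at /mount-9p and list its contents.
  out/minikube-linux-amd64 -p functional-776422 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-776422 ssh "ls -la /mount-9p"
  # Tear the mount down when finished.
  kill "$MOUNT_PID"
  out/minikube-linux-amd64 -p functional-776422 ssh "sudo umount -f /mount-9p" || true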

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "288.667821ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "66.57438ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "267.030161ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "61.391176ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
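
The JSON variants are mainly there for scripting. A minimal sketch that extracts profile names with jq; the "valid" key and "Name" field are assumptions based on the current shape of minikube's profile JSON and may differ between versions:

  # List profiles as JSON and print the names of the valid ones (key names assumed).
  out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
  # The --light variant skips probing cluster status, which is why it returns in ~60ms above.
  out/minikube-linux-amd64 profile list -o json --light | jq .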

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 version -o=json --components: (1.05045854s)
--- PASS: TestFunctional/parallel/Version/components (1.05s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 service list -o json
functional_test.go:1493: Took "512.081349ms" to run "out/minikube-linux-amd64 -p functional-776422 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-776422 /tmp/TestFunctionalparallelMountCmdspecific-port3457824882/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776422 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (313.72135ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-776422 /tmp/TestFunctionalparallelMountCmdspecific-port3457824882/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776422 ssh "sudo umount -f /mount-9p": exit status 1 (220.51904ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-776422 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-776422 /tmp/TestFunctionalparallelMountCmdspecific-port3457824882/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.91:31915
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.91:31915
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-776422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1341246649/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-776422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1341246649/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-776422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1341246649/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776422 ssh "findmnt -T" /mount1: exit status 1 (267.926752ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-776422 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-776422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1341246649/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-776422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1341246649/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-776422 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1341246649/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)
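
VerifyCleanup leans on `minikube mount --kill=true`, which terminates every mount helper for the profile in one go; the "unable to find parent, assuming dead" lines are the test confirming those background daemons are gone afterwards. By hand:

  # Start a few background mounts, then kill them all for the profile at once.
  mkdir -p /tmp/multi-mount
  out/minikube-linux-amd64 mount -p functional-776422 /tmp/multi-mount:/mount1 &
  out/minikube-linux-amd64 mount -p functional-776422 /tmp/multi-mount:/mount2 &
  sleep 3
  out/minikube-linux-amd64 mount -p functional-776422 --kill=true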

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-776422 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-776422
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-776422
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-776422 image ls --format short --alsologtostderr:
I0108 20:26:58.168142   27067 out.go:296] Setting OutFile to fd 1 ...
I0108 20:26:58.168280   27067 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:26:58.168293   27067 out.go:309] Setting ErrFile to fd 2...
I0108 20:26:58.168301   27067 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:26:58.168598   27067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
I0108 20:26:58.169414   27067 config.go:182] Loaded profile config "functional-776422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:26:58.169574   27067 config.go:182] Loaded profile config "functional-776422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:26:58.170139   27067 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 20:26:58.170190   27067 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:26:58.185356   27067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41179
I0108 20:26:58.185819   27067 main.go:141] libmachine: () Calling .GetVersion
I0108 20:26:58.186368   27067 main.go:141] libmachine: Using API Version  1
I0108 20:26:58.186390   27067 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:26:58.186786   27067 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:26:58.187005   27067 main.go:141] libmachine: (functional-776422) Calling .GetState
I0108 20:26:58.190005   27067 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 20:26:58.190048   27067 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:26:58.203879   27067 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
I0108 20:26:58.204208   27067 main.go:141] libmachine: () Calling .GetVersion
I0108 20:26:58.204613   27067 main.go:141] libmachine: Using API Version  1
I0108 20:26:58.204633   27067 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:26:58.204918   27067 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:26:58.205101   27067 main.go:141] libmachine: (functional-776422) Calling .DriverName
I0108 20:26:58.205339   27067 ssh_runner.go:195] Run: systemctl --version
I0108 20:26:58.205372   27067 main.go:141] libmachine: (functional-776422) Calling .GetSSHHostname
I0108 20:26:58.208649   27067 main.go:141] libmachine: (functional-776422) DBG | domain functional-776422 has defined MAC address 52:54:00:fb:46:ab in network mk-functional-776422
I0108 20:26:58.209012   27067 main.go:141] libmachine: (functional-776422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:46:ab", ip: ""} in network mk-functional-776422: {Iface:virbr1 ExpiryTime:2024-01-08 21:23:15 +0000 UTC Type:0 Mac:52:54:00:fb:46:ab Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:functional-776422 Clientid:01:52:54:00:fb:46:ab}
I0108 20:26:58.209057   27067 main.go:141] libmachine: (functional-776422) DBG | domain functional-776422 has defined IP address 192.168.50.91 and MAC address 52:54:00:fb:46:ab in network mk-functional-776422
I0108 20:26:58.209297   27067 main.go:141] libmachine: (functional-776422) Calling .GetSSHPort
I0108 20:26:58.209461   27067 main.go:141] libmachine: (functional-776422) Calling .GetSSHKeyPath
I0108 20:26:58.209618   27067 main.go:141] libmachine: (functional-776422) Calling .GetSSHUsername
I0108 20:26:58.209745   27067 sshutil.go:53] new ssh client: &{IP:192.168.50.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/functional-776422/id_rsa Username:docker}
I0108 20:26:58.328721   27067 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 20:26:58.464309   27067 main.go:141] libmachine: Making call to close driver server
I0108 20:26:58.464326   27067 main.go:141] libmachine: (functional-776422) Calling .Close
I0108 20:26:58.464610   27067 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:26:58.464625   27067 main.go:141] libmachine: (functional-776422) DBG | Closing plugin on server side
I0108 20:26:58.464628   27067 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 20:26:58.464650   27067 main.go:141] libmachine: Making call to close driver server
I0108 20:26:58.464659   27067 main.go:141] libmachine: (functional-776422) Calling .Close
I0108 20:26:58.464879   27067 main.go:141] libmachine: (functional-776422) DBG | Closing plugin on server side
I0108 20:26:58.464908   27067 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:26:58.464923   27067 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-776422 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | d453dd892d935 | 191MB  |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| gcr.io/google-containers/addon-resizer  | functional-776422  | ffd4cfbbe753e | 34.1MB |
| localhost/minikube-local-cache-test     | functional-776422  | e95f4e203e413 | 3.35kB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-776422 image ls --format table --alsologtostderr:
I0108 20:26:59.101270   27196 out.go:296] Setting OutFile to fd 1 ...
I0108 20:26:59.101412   27196 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:26:59.101422   27196 out.go:309] Setting ErrFile to fd 2...
I0108 20:26:59.101427   27196 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:26:59.101655   27196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
I0108 20:26:59.102373   27196 config.go:182] Loaded profile config "functional-776422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:26:59.102496   27196 config.go:182] Loaded profile config "functional-776422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:26:59.102895   27196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 20:26:59.102942   27196 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:26:59.117045   27196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35079
I0108 20:26:59.117568   27196 main.go:141] libmachine: () Calling .GetVersion
I0108 20:26:59.118117   27196 main.go:141] libmachine: Using API Version  1
I0108 20:26:59.118137   27196 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:26:59.118510   27196 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:26:59.118711   27196 main.go:141] libmachine: (functional-776422) Calling .GetState
I0108 20:26:59.120474   27196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 20:26:59.120513   27196 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:26:59.134397   27196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46105
I0108 20:26:59.134824   27196 main.go:141] libmachine: () Calling .GetVersion
I0108 20:26:59.135295   27196 main.go:141] libmachine: Using API Version  1
I0108 20:26:59.135319   27196 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:26:59.135618   27196 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:26:59.135784   27196 main.go:141] libmachine: (functional-776422) Calling .DriverName
I0108 20:26:59.135987   27196 ssh_runner.go:195] Run: systemctl --version
I0108 20:26:59.136007   27196 main.go:141] libmachine: (functional-776422) Calling .GetSSHHostname
I0108 20:26:59.138953   27196 main.go:141] libmachine: (functional-776422) DBG | domain functional-776422 has defined MAC address 52:54:00:fb:46:ab in network mk-functional-776422
I0108 20:26:59.139454   27196 main.go:141] libmachine: (functional-776422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:46:ab", ip: ""} in network mk-functional-776422: {Iface:virbr1 ExpiryTime:2024-01-08 21:23:15 +0000 UTC Type:0 Mac:52:54:00:fb:46:ab Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:functional-776422 Clientid:01:52:54:00:fb:46:ab}
I0108 20:26:59.139494   27196 main.go:141] libmachine: (functional-776422) DBG | domain functional-776422 has defined IP address 192.168.50.91 and MAC address 52:54:00:fb:46:ab in network mk-functional-776422
I0108 20:26:59.139626   27196 main.go:141] libmachine: (functional-776422) Calling .GetSSHPort
I0108 20:26:59.139927   27196 main.go:141] libmachine: (functional-776422) Calling .GetSSHKeyPath
I0108 20:26:59.140122   27196 main.go:141] libmachine: (functional-776422) Calling .GetSSHUsername
I0108 20:26:59.140256   27196 sshutil.go:53] new ssh client: &{IP:192.168.50.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/functional-776422/id_rsa Username:docker}
I0108 20:26:59.255250   27196 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 20:26:59.304658   27196 main.go:141] libmachine: Making call to close driver server
I0108 20:26:59.304673   27196 main.go:141] libmachine: (functional-776422) Calling .Close
I0108 20:26:59.304966   27196 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:26:59.304990   27196 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 20:26:59.305020   27196 main.go:141] libmachine: Making call to close driver server
I0108 20:26:59.305026   27196 main.go:141] libmachine: (functional-776422) DBG | Closing plugin on server side
I0108 20:26:59.305034   27196 main.go:141] libmachine: (functional-776422) Calling .Close
I0108 20:26:59.305277   27196 main.go:141] libmachine: (functional-776422) DBG | Closing plugin on server side
I0108 20:26:59.305284   27196 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:26:59.305316   27196 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-776422 image ls --format json --alsologtostderr:
[{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},
{"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026","docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
{"id":"e95f4e203e41399171fa4f1a6456865c7a2f132d964b879b19ef2eb8a1dca13f","repoDigests":["localhost/minikube-local-cache-test@sha256:444734c3f7f8f3d3d58b861cd5f4a536f04c93e322279923cf5b37f10d62107c"],"repoTags":["localhost/minikube-local-cache-test:functional-776422"],"size":"3345"},
{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-776422"],"size":"34114467"},
{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},
{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},
{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},
{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},
{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},
{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},
{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-776422 image ls --format json --alsologtostderr:
I0108 20:26:58.537473   27124 out.go:296] Setting OutFile to fd 1 ...
I0108 20:26:58.537628   27124 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:26:58.537649   27124 out.go:309] Setting ErrFile to fd 2...
I0108 20:26:58.537659   27124 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:26:58.537920   27124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
I0108 20:26:58.538559   27124 config.go:182] Loaded profile config "functional-776422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:26:58.538687   27124 config.go:182] Loaded profile config "functional-776422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:26:58.539093   27124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 20:26:58.539152   27124 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:26:58.555544   27124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
I0108 20:26:58.556047   27124 main.go:141] libmachine: () Calling .GetVersion
I0108 20:26:58.556639   27124 main.go:141] libmachine: Using API Version  1
I0108 20:26:58.556659   27124 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:26:58.556982   27124 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:26:58.557174   27124 main.go:141] libmachine: (functional-776422) Calling .GetState
I0108 20:26:58.559268   27124 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 20:26:58.559328   27124 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:26:58.575097   27124 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38019
I0108 20:26:58.575716   27124 main.go:141] libmachine: () Calling .GetVersion
I0108 20:26:58.576765   27124 main.go:141] libmachine: Using API Version  1
I0108 20:26:58.576794   27124 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:26:58.577311   27124 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:26:58.577496   27124 main.go:141] libmachine: (functional-776422) Calling .DriverName
I0108 20:26:58.577756   27124 ssh_runner.go:195] Run: systemctl --version
I0108 20:26:58.577787   27124 main.go:141] libmachine: (functional-776422) Calling .GetSSHHostname
I0108 20:26:58.580974   27124 main.go:141] libmachine: (functional-776422) DBG | domain functional-776422 has defined MAC address 52:54:00:fb:46:ab in network mk-functional-776422
I0108 20:26:58.581402   27124 main.go:141] libmachine: (functional-776422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:46:ab", ip: ""} in network mk-functional-776422: {Iface:virbr1 ExpiryTime:2024-01-08 21:23:15 +0000 UTC Type:0 Mac:52:54:00:fb:46:ab Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:functional-776422 Clientid:01:52:54:00:fb:46:ab}
I0108 20:26:58.581442   27124 main.go:141] libmachine: (functional-776422) DBG | domain functional-776422 has defined IP address 192.168.50.91 and MAC address 52:54:00:fb:46:ab in network mk-functional-776422
I0108 20:26:58.581618   27124 main.go:141] libmachine: (functional-776422) Calling .GetSSHPort
I0108 20:26:58.581822   27124 main.go:141] libmachine: (functional-776422) Calling .GetSSHKeyPath
I0108 20:26:58.581985   27124 main.go:141] libmachine: (functional-776422) Calling .GetSSHUsername
I0108 20:26:58.582124   27124 sshutil.go:53] new ssh client: &{IP:192.168.50.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/functional-776422/id_rsa Username:docker}
I0108 20:26:58.717080   27124 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 20:26:58.844688   27124 main.go:141] libmachine: Making call to close driver server
I0108 20:26:58.844704   27124 main.go:141] libmachine: (functional-776422) Calling .Close
I0108 20:26:58.844930   27124 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:26:58.844951   27124 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 20:26:58.844969   27124 main.go:141] libmachine: Making call to close driver server
I0108 20:26:58.844978   27124 main.go:141] libmachine: (functional-776422) Calling .Close
I0108 20:26:58.845236   27124 main.go:141] libmachine: (functional-776422) DBG | Closing plugin on server side
I0108 20:26:58.845277   27124 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:26:58.845295   27124 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.57s)
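
Note: the JSON stdout above is a single array of image records, one object per image, so it can be queried directly with jq. A minimal sketch, assuming jq is available on the host that runs the minikube binary (this is not part of the test itself):

    # list every tag in the cluster's CRI-O image store; untagged images have an empty repoTags array
    out/minikube-linux-amd64 -p functional-776422 image ls --format json | jq -r '.[].repoTags[]'
    # sort images by size (sizes are reported as byte-count strings)
    out/minikube-linux-amd64 -p functional-776422 image ls --format json | jq -r 'sort_by(.size | tonumber) | .[] | "\(.size)\t\(.repoTags[0] // .id)"'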

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image ls --format yaml --alsologtostderr
E0108 20:26:58.352147   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-776422 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-776422
size: "34114467"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
- docker.io/library/nginx@sha256:9784f7985f6fba493ba30fb68419f50484fee8faaf677216cb95826f8491d2e9
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: e95f4e203e41399171fa4f1a6456865c7a2f132d964b879b19ef2eb8a1dca13f
repoDigests:
- localhost/minikube-local-cache-test@sha256:444734c3f7f8f3d3d58b861cd5f4a536f04c93e322279923cf5b37f10d62107c
repoTags:
- localhost/minikube-local-cache-test:functional-776422
size: "3345"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-776422 image ls --format yaml --alsologtostderr:
I0108 20:26:58.168142   27068 out.go:296] Setting OutFile to fd 1 ...
I0108 20:26:58.168281   27068 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:26:58.168293   27068 out.go:309] Setting ErrFile to fd 2...
I0108 20:26:58.168301   27068 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:26:58.168612   27068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
I0108 20:26:58.169413   27068 config.go:182] Loaded profile config "functional-776422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:26:58.169570   27068 config.go:182] Loaded profile config "functional-776422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:26:58.170157   27068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 20:26:58.170199   27068 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:26:58.185372   27068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
I0108 20:26:58.185823   27068 main.go:141] libmachine: () Calling .GetVersion
I0108 20:26:58.186435   27068 main.go:141] libmachine: Using API Version  1
I0108 20:26:58.186474   27068 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:26:58.186860   27068 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:26:58.187049   27068 main.go:141] libmachine: (functional-776422) Calling .GetState
I0108 20:26:58.189803   27068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 20:26:58.189838   27068 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:26:58.203621   27068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
I0108 20:26:58.204130   27068 main.go:141] libmachine: () Calling .GetVersion
I0108 20:26:58.204607   27068 main.go:141] libmachine: Using API Version  1
I0108 20:26:58.204635   27068 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:26:58.204952   27068 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:26:58.205101   27068 main.go:141] libmachine: (functional-776422) Calling .DriverName
I0108 20:26:58.205290   27068 ssh_runner.go:195] Run: systemctl --version
I0108 20:26:58.205317   27068 main.go:141] libmachine: (functional-776422) Calling .GetSSHHostname
I0108 20:26:58.208557   27068 main.go:141] libmachine: (functional-776422) DBG | domain functional-776422 has defined MAC address 52:54:00:fb:46:ab in network mk-functional-776422
I0108 20:26:58.208965   27068 main.go:141] libmachine: (functional-776422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:46:ab", ip: ""} in network mk-functional-776422: {Iface:virbr1 ExpiryTime:2024-01-08 21:23:15 +0000 UTC Type:0 Mac:52:54:00:fb:46:ab Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:functional-776422 Clientid:01:52:54:00:fb:46:ab}
I0108 20:26:58.208995   27068 main.go:141] libmachine: (functional-776422) DBG | domain functional-776422 has defined IP address 192.168.50.91 and MAC address 52:54:00:fb:46:ab in network mk-functional-776422
I0108 20:26:58.209168   27068 main.go:141] libmachine: (functional-776422) Calling .GetSSHPort
I0108 20:26:58.209319   27068 main.go:141] libmachine: (functional-776422) Calling .GetSSHKeyPath
I0108 20:26:58.209479   27068 main.go:141] libmachine: (functional-776422) Calling .GetSSHUsername
I0108 20:26:58.209597   27068 sshutil.go:53] new ssh client: &{IP:192.168.50.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/functional-776422/id_rsa Username:docker}
I0108 20:26:58.350929   27068 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 20:26:58.431936   27068 main.go:141] libmachine: Making call to close driver server
I0108 20:26:58.431949   27068 main.go:141] libmachine: (functional-776422) Calling .Close
I0108 20:26:58.432326   27068 main.go:141] libmachine: (functional-776422) DBG | Closing plugin on server side
I0108 20:26:58.432348   27068 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:26:58.432361   27068 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 20:26:58.432380   27068 main.go:141] libmachine: Making call to close driver server
I0108 20:26:58.432393   27068 main.go:141] libmachine: (functional-776422) Calling .Close
I0108 20:26:58.432674   27068 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:26:58.432690   27068 main.go:141] libmachine: (functional-776422) DBG | Closing plugin on server side
I0108 20:26:58.432698   27068 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-776422 ssh pgrep buildkitd: exit status 1 (308.948812ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image build -t localhost/my-image:functional-776422 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 image build -t localhost/my-image:functional-776422 testdata/build --alsologtostderr: (4.039381456s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-776422 image build -t localhost/my-image:functional-776422 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> abb2f7e5eb8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-776422
--> 9b27dcf5e33
Successfully tagged localhost/my-image:functional-776422
9b27dcf5e33ea88bacb2235f3ab4a55453eb1d284083eb5aed26dfa63458258c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-776422 image build -t localhost/my-image:functional-776422 testdata/build --alsologtostderr:
I0108 20:26:58.823236   27174 out.go:296] Setting OutFile to fd 1 ...
I0108 20:26:58.823465   27174 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:26:58.823479   27174 out.go:309] Setting ErrFile to fd 2...
I0108 20:26:58.823486   27174 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:26:58.823848   27174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
I0108 20:26:58.824845   27174 config.go:182] Loaded profile config "functional-776422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:26:58.825617   27174 config.go:182] Loaded profile config "functional-776422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0108 20:26:58.826256   27174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 20:26:58.826343   27174 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:26:58.841200   27174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44259
I0108 20:26:58.841682   27174 main.go:141] libmachine: () Calling .GetVersion
I0108 20:26:58.842327   27174 main.go:141] libmachine: Using API Version  1
I0108 20:26:58.842355   27174 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:26:58.842744   27174 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:26:58.842944   27174 main.go:141] libmachine: (functional-776422) Calling .GetState
I0108 20:26:58.845193   27174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0108 20:26:58.845242   27174 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:26:58.859896   27174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34181
I0108 20:26:58.860417   27174 main.go:141] libmachine: () Calling .GetVersion
I0108 20:26:58.860922   27174 main.go:141] libmachine: Using API Version  1
I0108 20:26:58.860954   27174 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:26:58.861320   27174 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:26:58.861543   27174 main.go:141] libmachine: (functional-776422) Calling .DriverName
I0108 20:26:58.861792   27174 ssh_runner.go:195] Run: systemctl --version
I0108 20:26:58.861822   27174 main.go:141] libmachine: (functional-776422) Calling .GetSSHHostname
I0108 20:26:58.865180   27174 main.go:141] libmachine: (functional-776422) DBG | domain functional-776422 has defined MAC address 52:54:00:fb:46:ab in network mk-functional-776422
I0108 20:26:58.865657   27174 main.go:141] libmachine: (functional-776422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:46:ab", ip: ""} in network mk-functional-776422: {Iface:virbr1 ExpiryTime:2024-01-08 21:23:15 +0000 UTC Type:0 Mac:52:54:00:fb:46:ab Iaid: IPaddr:192.168.50.91 Prefix:24 Hostname:functional-776422 Clientid:01:52:54:00:fb:46:ab}
I0108 20:26:58.865688   27174 main.go:141] libmachine: (functional-776422) DBG | domain functional-776422 has defined IP address 192.168.50.91 and MAC address 52:54:00:fb:46:ab in network mk-functional-776422
I0108 20:26:58.865836   27174 main.go:141] libmachine: (functional-776422) Calling .GetSSHPort
I0108 20:26:58.866044   27174 main.go:141] libmachine: (functional-776422) Calling .GetSSHKeyPath
I0108 20:26:58.866213   27174 main.go:141] libmachine: (functional-776422) Calling .GetSSHUsername
I0108 20:26:58.866349   27174 sshutil.go:53] new ssh client: &{IP:192.168.50.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/functional-776422/id_rsa Username:docker}
I0108 20:26:58.968198   27174 build_images.go:151] Building image from path: /tmp/build.3709967891.tar
I0108 20:26:58.968264   27174 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 20:26:58.979183   27174 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3709967891.tar
I0108 20:26:58.984086   27174 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3709967891.tar: stat -c "%s %y" /var/lib/minikube/build/build.3709967891.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3709967891.tar': No such file or directory
I0108 20:26:58.984134   27174 ssh_runner.go:362] scp /tmp/build.3709967891.tar --> /var/lib/minikube/build/build.3709967891.tar (3072 bytes)
I0108 20:26:59.051516   27174 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3709967891
I0108 20:26:59.065295   27174 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3709967891 -xf /var/lib/minikube/build/build.3709967891.tar
I0108 20:26:59.077494   27174 crio.go:297] Building image: /var/lib/minikube/build/build.3709967891
I0108 20:26:59.077585   27174 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-776422 /var/lib/minikube/build/build.3709967891 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0108 20:27:02.757076   27174 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-776422 /var/lib/minikube/build/build.3709967891 --cgroup-manager=cgroupfs: (3.679443781s)
I0108 20:27:02.757145   27174 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3709967891
I0108 20:27:02.769060   27174 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3709967891.tar
I0108 20:27:02.779867   27174 build_images.go:207] Built localhost/my-image:functional-776422 from /tmp/build.3709967891.tar
I0108 20:27:02.779901   27174 build_images.go:123] succeeded building to: functional-776422
I0108 20:27:02.779906   27174 build_images.go:124] failed building to: 
I0108 20:27:02.779953   27174 main.go:141] libmachine: Making call to close driver server
I0108 20:27:02.779962   27174 main.go:141] libmachine: (functional-776422) Calling .Close
I0108 20:27:02.780275   27174 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:27:02.780308   27174 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 20:27:02.780320   27174 main.go:141] libmachine: Making call to close driver server
I0108 20:27:02.780329   27174 main.go:141] libmachine: (functional-776422) Calling .Close
I0108 20:27:02.780534   27174 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:27:02.780549   27174 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 20:27:02.780553   27174 main.go:141] libmachine: (functional-776422) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.59s)
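
Note: the build context used here (testdata/build) is tiny; judging from the STEP 1/3..3/3 lines above, it amounts to a three-instruction Dockerfile plus one file. A hand-run sketch of the same build (the file contents below are placeholders; the real testdata ships with the minikube repository):

    mkdir build-ctx && cd build-ctx
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo 'placeholder' > content.txt    # stand-in for the test's content.txt
    out/minikube-linux-amd64 -p functional-776422 image build -t localhost/my-image:functional-776422 .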

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.209711734s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-776422
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image load --daemon gcr.io/google-containers/addon-resizer:functional-776422 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 image load --daemon gcr.io/google-containers/addon-resizer:functional-776422 --alsologtostderr: (4.688573553s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image load --daemon gcr.io/google-containers/addon-resizer:functional-776422 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 image load --daemon gcr.io/google-containers/addon-resizer:functional-776422 --alsologtostderr: (5.594662798s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image ls
2024/01/08 20:26:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image save gcr.io/google-containers/addon-resizer:functional-776422 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 image save gcr.io/google-containers/addon-resizer:functional-776422 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.642830792s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image rm gcr.io/google-containers/addon-resizer:functional-776422 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.534719893s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.81s)
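
Note: ImageSaveToFile and ImageLoadFromFile together exercise a tarball round trip through the cluster's image store. The same flow, condensed (the tarball path is shortened here; the test writes it into the Jenkins workspace):

    out/minikube-linux-amd64 -p functional-776422 image save gcr.io/google-containers/addon-resizer:functional-776422 ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-776422 image load ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-776422 image ls | grep addon-resizer    # confirm the tag is listed again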

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-776422
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-776422 image save --daemon gcr.io/google-containers/addon-resizer:functional-776422 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-776422 image save --daemon gcr.io/google-containers/addon-resizer:functional-776422 --alsologtostderr: (1.330670217s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-776422
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.37s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-776422
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-776422
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-776422
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (123.22s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-056019 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0108 20:28:20.273142   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-056019 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (2m3.223603589s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (123.22s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.66s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-056019 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-056019 addons enable ingress --alsologtostderr -v=5: (18.655793251s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.66s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-056019 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

                                                
                                    
TestJSONOutput/start/Command (99.97s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-572131 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0108 20:32:26.440470   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:33:48.361374   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-572131 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.973306059s)
--- PASS: TestJSONOutput/start/Command (99.97s)
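
Note: with --output=json, minikube emits one JSON event per line instead of the usual progress text; the DistinctCurrentSteps and IncreasingCurrentSteps subtests below assert on the step ordering in that stream. A minimal way to inspect it by hand, assuming jq is installed (the exact field names inside each event are not reproduced here):

    out/minikube-linux-amd64 start -p json-output-572131 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio | jq .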

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-572131 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.69s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-572131 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-572131 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-572131 --output=json --user=testUser: (7.106600931s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-128864 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-128864 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.599181ms)

-- stdout --
	{"specversion":"1.0","id":"e1c0bddb-5edd-4d3a-9662-366352059bd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-128864] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad97577b-7a86-4d34-8fe1-c9e5043384fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17907"}}
	{"specversion":"1.0","id":"3a91b670-e5cd-4cf5-b593-e0eef64080bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c1f87fac-a2ba-4584-9050-6da3b62b7994","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig"}}
	{"specversion":"1.0","id":"5e4e1c59-9145-4bb3-a6e5-1abb8d905ff8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube"}}
	{"specversion":"1.0","id":"a238d23c-2033-42f9-a919-5eb77ee2c93f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7ee6e91d-9466-471e-91de-fd4ea1cd49b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3b320c98-409c-4354-8b94-d51f58843686","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-128864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-128864
--- PASS: TestErrorJSONOutput (0.22s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (103.48s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-352890 --driver=kvm2  --container-runtime=crio
E0108 20:34:26.819573   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:34:26.824870   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:34:26.835178   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:34:26.855540   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:34:26.895843   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:34:26.976184   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:34:27.136632   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:34:27.457295   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:34:28.098242   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:34:29.378775   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:34:31.939692   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:34:37.059936   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:34:47.301098   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-352890 --driver=kvm2  --container-runtime=crio: (48.052627503s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-355679 --driver=kvm2  --container-runtime=crio
E0108 20:35:07.781993   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:35:36.430159   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:35:48.742743   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-355679 --driver=kvm2  --container-runtime=crio: (52.49417182s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-352890
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-355679
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-355679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-355679
helpers_test.go:175: Cleaning up "first-352890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-352890
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-352890: (1.027777758s)
--- PASS: TestMinikubeProfile (103.48s)

TestMountStart/serial/StartWithMountFirst (28.83s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-340632 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0108 20:36:04.517347   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-340632 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.826346293s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.83s)

TestMountStart/serial/VerifyMountFirst (0.42s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-340632 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-340632 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)

TestMountStart/serial/StartWithMountSecond (26.16s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-354814 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0108 20:36:32.201716   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-354814 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.159373289s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.16s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-354814 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-354814 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-340632 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-354814 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-354814 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-354814
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-354814: (1.221976005s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (25.66s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-354814
E0108 20:37:10.665891   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-354814: (24.657511319s)
--- PASS: TestMountStart/serial/RestartStopped (25.66s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-354814 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-354814 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (169.66s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-340815 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0108 20:39:26.819756   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 20:39:54.506298   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-340815 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m49.212216772s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (169.66s)

TestMultiNode/serial/DeployApp2Nodes (6.79s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-340815 -- rollout status deployment/busybox: (4.82948042s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- exec busybox-5bc68d56bd-95tbd -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- exec busybox-5bc68d56bd-npzdk -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- exec busybox-5bc68d56bd-95tbd -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- exec busybox-5bc68d56bd-npzdk -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- exec busybox-5bc68d56bd-95tbd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-340815 -- exec busybox-5bc68d56bd-npzdk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.79s)

TestMultiNode/serial/AddNode (45.56s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-340815 -v 3 --alsologtostderr
E0108 20:40:36.429646   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:41:04.516862   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-340815 -v 3 --alsologtostderr: (44.939826331s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.56s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-340815 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

TestMultiNode/serial/CopyFile (7.92s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 cp testdata/cp-test.txt multinode-340815:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 cp multinode-340815:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile686812324/001/cp-test_multinode-340815.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 cp multinode-340815:/home/docker/cp-test.txt multinode-340815-m02:/home/docker/cp-test_multinode-340815_multinode-340815-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815-m02 "sudo cat /home/docker/cp-test_multinode-340815_multinode-340815-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 cp multinode-340815:/home/docker/cp-test.txt multinode-340815-m03:/home/docker/cp-test_multinode-340815_multinode-340815-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815-m03 "sudo cat /home/docker/cp-test_multinode-340815_multinode-340815-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 cp testdata/cp-test.txt multinode-340815-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 cp multinode-340815-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile686812324/001/cp-test_multinode-340815-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 cp multinode-340815-m02:/home/docker/cp-test.txt multinode-340815:/home/docker/cp-test_multinode-340815-m02_multinode-340815.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815 "sudo cat /home/docker/cp-test_multinode-340815-m02_multinode-340815.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 cp multinode-340815-m02:/home/docker/cp-test.txt multinode-340815-m03:/home/docker/cp-test_multinode-340815-m02_multinode-340815-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815-m03 "sudo cat /home/docker/cp-test_multinode-340815-m02_multinode-340815-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 cp testdata/cp-test.txt multinode-340815-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 cp multinode-340815-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile686812324/001/cp-test_multinode-340815-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 cp multinode-340815-m03:/home/docker/cp-test.txt multinode-340815:/home/docker/cp-test_multinode-340815-m03_multinode-340815.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815 "sudo cat /home/docker/cp-test_multinode-340815-m03_multinode-340815.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 cp multinode-340815-m03:/home/docker/cp-test.txt multinode-340815-m02:/home/docker/cp-test_multinode-340815-m03_multinode-340815-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 ssh -n multinode-340815-m02 "sudo cat /home/docker/cp-test_multinode-340815-m03_multinode-340815-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.92s)

TestMultiNode/serial/StopNode (3.01s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-340815 node stop m03: (2.097452373s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-340815 status: exit status 7 (462.483845ms)

-- stdout --
	multinode-340815
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-340815-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-340815-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-340815 status --alsologtostderr: exit status 7 (454.084268ms)

-- stdout --
	multinode-340815
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-340815-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-340815-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0108 20:41:16.886817   34379 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:41:16.887117   34379 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:41:16.887130   34379 out.go:309] Setting ErrFile to fd 2...
	I0108 20:41:16.887136   34379 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:41:16.887350   34379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 20:41:16.887535   34379 out.go:303] Setting JSON to false
	I0108 20:41:16.887571   34379 mustload.go:65] Loading cluster: multinode-340815
	I0108 20:41:16.887619   34379 notify.go:220] Checking for updates...
	I0108 20:41:16.888013   34379 config.go:182] Loaded profile config "multinode-340815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0108 20:41:16.888027   34379 status.go:255] checking status of multinode-340815 ...
	I0108 20:41:16.888477   34379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:41:16.888531   34379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:41:16.907977   34379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34927
	I0108 20:41:16.908463   34379 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:41:16.909183   34379 main.go:141] libmachine: Using API Version  1
	I0108 20:41:16.909220   34379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:41:16.909596   34379 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:41:16.909820   34379 main.go:141] libmachine: (multinode-340815) Calling .GetState
	I0108 20:41:16.911530   34379 status.go:330] multinode-340815 host status = "Running" (err=<nil>)
	I0108 20:41:16.911547   34379 host.go:66] Checking if "multinode-340815" exists ...
	I0108 20:41:16.911857   34379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:41:16.911903   34379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:41:16.926257   34379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32797
	I0108 20:41:16.926683   34379 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:41:16.927133   34379 main.go:141] libmachine: Using API Version  1
	I0108 20:41:16.927155   34379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:41:16.927509   34379 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:41:16.927671   34379 main.go:141] libmachine: (multinode-340815) Calling .GetIP
	I0108 20:41:16.930154   34379 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:41:16.930544   34379 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:41:16.930582   34379 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:41:16.930631   34379 host.go:66] Checking if "multinode-340815" exists ...
	I0108 20:41:16.930934   34379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:41:16.930976   34379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:41:16.945519   34379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I0108 20:41:16.945960   34379 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:41:16.946454   34379 main.go:141] libmachine: Using API Version  1
	I0108 20:41:16.946481   34379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:41:16.946813   34379 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:41:16.946989   34379 main.go:141] libmachine: (multinode-340815) Calling .DriverName
	I0108 20:41:16.947170   34379 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:41:16.947191   34379 main.go:141] libmachine: (multinode-340815) Calling .GetSSHHostname
	I0108 20:41:16.949869   34379 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:41:16.950314   34379 main.go:141] libmachine: (multinode-340815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:a0:1e", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:37:36 +0000 UTC Type:0 Mac:52:54:00:06:a0:1e Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:multinode-340815 Clientid:01:52:54:00:06:a0:1e}
	I0108 20:41:16.950341   34379 main.go:141] libmachine: (multinode-340815) DBG | domain multinode-340815 has defined IP address 192.168.39.196 and MAC address 52:54:00:06:a0:1e in network mk-multinode-340815
	I0108 20:41:16.950495   34379 main.go:141] libmachine: (multinode-340815) Calling .GetSSHPort
	I0108 20:41:16.950677   34379 main.go:141] libmachine: (multinode-340815) Calling .GetSSHKeyPath
	I0108 20:41:16.950868   34379 main.go:141] libmachine: (multinode-340815) Calling .GetSSHUsername
	I0108 20:41:16.951001   34379 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815/id_rsa Username:docker}
	I0108 20:41:17.041795   34379 ssh_runner.go:195] Run: systemctl --version
	I0108 20:41:17.048402   34379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:41:17.061821   34379 kubeconfig.go:92] found "multinode-340815" server: "https://192.168.39.196:8443"
	I0108 20:41:17.061846   34379 api_server.go:166] Checking apiserver status ...
	I0108 20:41:17.061875   34379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 20:41:17.074469   34379 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1082/cgroup
	I0108 20:41:17.084028   34379 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod5a9f4acc9b0ffa502cc0493a6d857b92/crio-f0d2d5342a010b354049254f307f86def47f9969d4181dee8e0a32622e57feea"
	I0108 20:41:17.084108   34379 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod5a9f4acc9b0ffa502cc0493a6d857b92/crio-f0d2d5342a010b354049254f307f86def47f9969d4181dee8e0a32622e57feea/freezer.state
	I0108 20:41:17.097790   34379 api_server.go:204] freezer state: "THAWED"
	I0108 20:41:17.097884   34379 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0108 20:41:17.103274   34379 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0108 20:41:17.103300   34379 status.go:421] multinode-340815 apiserver status = Running (err=<nil>)
	I0108 20:41:17.103316   34379 status.go:257] multinode-340815 status: &{Name:multinode-340815 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:41:17.103332   34379 status.go:255] checking status of multinode-340815-m02 ...
	I0108 20:41:17.103624   34379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:41:17.103655   34379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:41:17.118230   34379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45833
	I0108 20:41:17.118767   34379 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:41:17.119238   34379 main.go:141] libmachine: Using API Version  1
	I0108 20:41:17.119261   34379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:41:17.119632   34379 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:41:17.119852   34379 main.go:141] libmachine: (multinode-340815-m02) Calling .GetState
	I0108 20:41:17.121367   34379 status.go:330] multinode-340815-m02 host status = "Running" (err=<nil>)
	I0108 20:41:17.121386   34379 host.go:66] Checking if "multinode-340815-m02" exists ...
	I0108 20:41:17.121673   34379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:41:17.121706   34379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:41:17.135847   34379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0108 20:41:17.136257   34379 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:41:17.136729   34379 main.go:141] libmachine: Using API Version  1
	I0108 20:41:17.136749   34379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:41:17.137036   34379 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:41:17.137223   34379 main.go:141] libmachine: (multinode-340815-m02) Calling .GetIP
	I0108 20:41:17.139905   34379 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:41:17.140361   34379 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:41:17.140403   34379 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:41:17.140514   34379 host.go:66] Checking if "multinode-340815-m02" exists ...
	I0108 20:41:17.140837   34379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:41:17.140872   34379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:41:17.155500   34379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0108 20:41:17.155976   34379 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:41:17.156459   34379 main.go:141] libmachine: Using API Version  1
	I0108 20:41:17.156478   34379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:41:17.156764   34379 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:41:17.156977   34379 main.go:141] libmachine: (multinode-340815-m02) Calling .DriverName
	I0108 20:41:17.157174   34379 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 20:41:17.157191   34379 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHHostname
	I0108 20:41:17.160080   34379 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:41:17.160582   34379 main.go:141] libmachine: (multinode-340815-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:58:8d", ip: ""} in network mk-multinode-340815: {Iface:virbr1 ExpiryTime:2024-01-08 21:38:43 +0000 UTC Type:0 Mac:52:54:00:85:58:8d Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-340815-m02 Clientid:01:52:54:00:85:58:8d}
	I0108 20:41:17.160610   34379 main.go:141] libmachine: (multinode-340815-m02) DBG | domain multinode-340815-m02 has defined IP address 192.168.39.78 and MAC address 52:54:00:85:58:8d in network mk-multinode-340815
	I0108 20:41:17.160783   34379 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHPort
	I0108 20:41:17.160957   34379 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHKeyPath
	I0108 20:41:17.161087   34379 main.go:141] libmachine: (multinode-340815-m02) Calling .GetSSHUsername
	I0108 20:41:17.161192   34379 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17907-10702/.minikube/machines/multinode-340815-m02/id_rsa Username:docker}
	I0108 20:41:17.251460   34379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 20:41:17.264698   34379 status.go:257] multinode-340815-m02 status: &{Name:multinode-340815-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 20:41:17.264729   34379 status.go:255] checking status of multinode-340815-m03 ...
	I0108 20:41:17.265029   34379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0108 20:41:17.265079   34379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:41:17.280813   34379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46133
	I0108 20:41:17.281266   34379 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:41:17.281733   34379 main.go:141] libmachine: Using API Version  1
	I0108 20:41:17.281751   34379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:41:17.282023   34379 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:41:17.282228   34379 main.go:141] libmachine: (multinode-340815-m03) Calling .GetState
	I0108 20:41:17.283798   34379 status.go:330] multinode-340815-m03 host status = "Stopped" (err=<nil>)
	I0108 20:41:17.283811   34379 status.go:343] host is not running, skipping remaining checks
	I0108 20:41:17.283815   34379 status.go:257] multinode-340815-m03 status: &{Name:multinode-340815-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.01s)

TestMultiNode/serial/StartAfterStop (32.63s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-340815 node start m03 --alsologtostderr: (31.979089501s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.63s)

TestMultiNode/serial/DeleteNode (1.6s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-340815 node delete m03: (1.026401709s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.60s)

TestMultiNode/serial/RestartMultiNode (438.55s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-340815 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0108 20:56:04.517211   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 20:58:39.480362   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 20:59:26.820067   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 21:00:36.429619   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 21:01:04.517107   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-340815 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m17.952865935s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-340815 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (438.55s)

TestMultiNode/serial/ValidateNameConflict (52.39s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-340815
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-340815-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-340815-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (81.779033ms)

-- stdout --
	* [multinode-340815-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-340815-m02' is duplicated with machine name 'multinode-340815-m02' in profile 'multinode-340815'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-340815-m03 --driver=kvm2  --container-runtime=crio
E0108 21:04:07.564030   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-340815-m03 --driver=kvm2  --container-runtime=crio: (51.024089251s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-340815
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-340815: exit status 80 (237.181915ms)

-- stdout --
	* Adding node m03 to cluster multinode-340815
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-340815-m03 already exists in multinode-340815-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-340815-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.39s)

TestScheduledStopUnix (118.26s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-681440 --memory=2048 --driver=kvm2  --container-runtime=crio
E0108 21:10:36.429889   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-681440 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.480210175s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-681440 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-681440 -n scheduled-stop-681440
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-681440 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-681440 --cancel-scheduled
E0108 21:11:04.519282   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-681440 -n scheduled-stop-681440
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-681440
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-681440 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-681440
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-681440: exit status 7 (75.556801ms)

                                                
                                                
-- stdout --
	scheduled-stop-681440
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-681440 -n scheduled-stop-681440
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-681440 -n scheduled-stop-681440: exit status 7 (75.429549ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-681440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-681440
--- PASS: TestScheduledStopUnix (118.26s)
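For reference, the scheduled-stop flow exercised by this test boils down to a handful of minikube commands; a minimal sketch using the same flags seen in the run (the profile name here is illustrative, not the one from the log):

    # create a throwaway profile with the kvm2 driver and the crio runtime
    minikube start -p scheduled-stop-demo --memory=2048 --driver=kvm2 --container-runtime=crio
    # schedule a stop five minutes out; the command returns immediately
    minikube stop -p scheduled-stop-demo --schedule 5m
    # a new --schedule replaces the pending one, and --cancel-scheduled clears it
    minikube stop -p scheduled-stop-demo --schedule 15s
    minikube stop -p scheduled-stop-demo --cancel-scheduled
    # once a schedule fires, status reports Stopped and exits with status 7
    minikube status -p scheduled-stop-demo --format={{.Host}}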

                                                
                                    
x
+
TestKubernetesUpgrade (344.94s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-862639 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0108 21:47:15.558149   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:48:39.483323   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 21:49:07.775985   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:07.781236   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:07.791562   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:07.811920   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:07.852990   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:07.933397   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:08.093913   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:08.414570   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:09.054806   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:10.335854   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:12.896386   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:18.016669   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:26.819959   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 21:49:28.257797   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:49:31.714285   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-862639 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m23.801170595s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-862639
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-862639: (7.131567143s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-862639 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-862639 status --format={{.Host}}: exit status 7 (87.79261ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-862639 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-862639 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.345418551s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-862639 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-862639 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-862639 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (113.128969ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-862639] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-862639
	    minikube start -p kubernetes-upgrade-862639 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8626392 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-862639 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-862639 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-862639 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.264931196s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-862639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-862639
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-862639: (1.130849804s)
--- PASS: TestKubernetesUpgrade (344.94s)
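The upgrade path validated above can be reproduced with the same flags the test uses; a rough sketch with a placeholder profile name:

    # bootstrap the cluster on the oldest supported version
    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p upgrade-demo
    # restarting with a newer --kubernetes-version upgrades the existing cluster in place
    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=crio
    # requesting an older version afterwards is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106)
    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio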

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-626488 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-626488 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (97.592206ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-626488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
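As the MK_USAGE error shows, --no-kubernetes and --kubernetes-version are mutually exclusive. If a version is pinned in the global config, clearing it first avoids the conflict; a short sketch (profile name illustrative):

    # rejected with exit status 14: the two flags cannot be combined
    minikube start -p no-k8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # drop any globally configured version, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p no-k8s-demo --no-kubernetes --driver=kvm2 --container-runtime=crio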

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (159.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-879273 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-879273 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m39.183645446s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (159.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (106.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-626488 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-626488 --driver=kvm2  --container-runtime=crio: (1m45.693286118s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-626488 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (106.02s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (7.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-626488 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-626488 --no-kubernetes --driver=kvm2  --container-runtime=crio: (6.32085968s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-626488 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-626488 status -o json: exit status 2 (252.442957ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-626488","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-626488
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-626488: (1.095614903s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-458620 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-458620 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (146.578146ms)

                                                
                                                
-- stdout --
	* [false-458620] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17907
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0108 21:13:41.952270   43554 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:13:41.952605   43554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:13:41.952616   43554 out.go:309] Setting ErrFile to fd 2...
	I0108 21:13:41.952623   43554 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:13:41.952955   43554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17907-10702/.minikube/bin
	I0108 21:13:41.953760   43554 out.go:303] Setting JSON to false
	I0108 21:13:41.955058   43554 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6946,"bootTime":1704741476,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:13:41.955149   43554 start.go:138] virtualization: kvm guest
	I0108 21:13:41.958303   43554 out.go:177] * [false-458620] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:13:41.960580   43554 notify.go:220] Checking for updates...
	I0108 21:13:41.960584   43554 out.go:177]   - MINIKUBE_LOCATION=17907
	I0108 21:13:41.962627   43554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:13:41.964326   43554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17907-10702/kubeconfig
	I0108 21:13:41.966222   43554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17907-10702/.minikube
	I0108 21:13:41.967821   43554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:13:41.969582   43554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:13:41.971629   43554 config.go:182] Loaded profile config "NoKubernetes-626488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0108 21:13:41.971784   43554 config.go:182] Loaded profile config "old-k8s-version-879273": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0108 21:13:41.971854   43554 config.go:182] Loaded profile config "running-upgrade-631345": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0108 21:13:41.971965   43554 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:13:42.013298   43554 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 21:13:42.014617   43554 start.go:298] selected driver: kvm2
	I0108 21:13:42.014647   43554 start.go:902] validating driver "kvm2" against <nil>
	I0108 21:13:42.014665   43554 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:13:42.017317   43554 out.go:177] 
	W0108 21:13:42.018854   43554 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0108 21:13:42.020397   43554 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-458620 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-458620

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-458620

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-458620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-458620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-458620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-458620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-458620

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-458620

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-458620

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-458620

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-458620

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-458620" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-458620" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:13:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.215:8443
  name: NoKubernetes-626488
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:13:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.130:8443
  name: old-k8s-version-879273
contexts:
- context:
    cluster: NoKubernetes-626488
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:13:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-626488
  name: NoKubernetes-626488
- context:
    cluster: old-k8s-version-879273
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:13:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: old-k8s-version-879273
  name: old-k8s-version-879273
current-context: NoKubernetes-626488
kind: Config
preferences: {}
users:
- name: NoKubernetes-626488
  user:
    client-certificate: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/NoKubernetes-626488/client.crt
    client-key: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/NoKubernetes-626488/client.key
- name: old-k8s-version-879273
  user:
    client-certificate: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt
    client-key: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-458620

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-458620"

                                                
                                                
----------------------- debugLogs end: false-458620 [took: 3.691266983s] --------------------------------
helpers_test.go:175: Cleaning up "false-458620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-458620
--- PASS: TestNetworkPlugins/group/false (4.01s)
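The rejection above is expected: --cni=false is incompatible with crio, which always needs a CNI plugin. A sketch of a start that passes the same validation (the bridge CNI is an illustrative choice, not what this test runs):

    # refused: MK_USAGE, the "crio" container runtime requires CNI
    minikube start -p cni-demo --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
    # accepted: any concrete CNI satisfies the check, e.g. the built-in bridge configuration
    minikube start -p cni-demo --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio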

                                                
                                    
x
+
TestNoKubernetes/serial/Start (30.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-626488 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-626488 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.424184172s)
--- PASS: TestNoKubernetes/serial/Start (30.42s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-626488 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-626488 "sudo systemctl is-active --quiet service kubelet": exit status 1 (246.61007ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
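The exit status 1 above is the success condition for this check: systemctl is-active exits non-zero (typically 3) when the kubelet unit is inactive, and minikube ssh surfaces that as a failure. An equivalent manual check, reusing the profile from the run above:

    # prints nothing because of --quiet; the exit code alone says whether kubelet runs
    minikube ssh -p NoKubernetes-626488 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # 0 = active, 3 = inactive, as seen in the log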

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-626488
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-626488: (1.340485048s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (44.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-626488 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-626488 --driver=kvm2  --container-runtime=crio: (44.493538032s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.49s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-879273 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a1a0116a-76bf-4438-8f5e-26265bebadd0] Pending
helpers_test.go:344: "busybox" [a1a0116a-76bf-4438-8f5e-26265bebadd0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a1a0116a-76bf-4438-8f5e-26265bebadd0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004854561s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-879273 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.50s)
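The deploy step is a create followed by a readiness wait and an exec; an equivalent manual sequence, with kubectl wait standing in for the test helper's polling:

    kubectl --context old-k8s-version-879273 create -f testdata/busybox.yaml
    # block until the pod labelled integration-test=busybox reports Ready (the test allows up to 8m)
    kubectl --context old-k8s-version-879273 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    # the test then checks the open-file limit inside the container
    kubectl --context old-k8s-version-879273 exec busybox -- /bin/sh -c "ulimit -n"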

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-879273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-879273 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)
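A note on the flags used here, read from the command itself: --images and --registries override the addon's image reference, and pointing the registry at fake.domain presumably keeps the metrics-server image from ever being pulled, so the verification step only describes the rendered Deployment rather than pod health. Sketch:

    # enable the addon with an overridden image and an unreachable registry
    minikube addons enable metrics-server -p old-k8s-version-879273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # the check inspects the Deployment spec in kube-system, not pod readiness
    kubectl --context old-k8s-version-879273 describe deploy/metrics-server -n kube-system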

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-626488 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-626488 "sudo systemctl is-active --quiet service kubelet": exit status 1 (242.05911ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
x
+
TestPause/serial/Start (97.67s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-046839 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0108 21:16:04.517536   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-046839 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m37.666501279s)
--- PASS: TestPause/serial/Start (97.67s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (136.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-420119 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-420119 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m16.773489345s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (136.77s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (1010.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-879273 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-879273 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (16m49.994894913s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-879273 -n old-k8s-version-879273
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (1010.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-420119 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c6f82bc4-6ef4-4d5f-8122-f8598dc4c70c] Pending
helpers_test.go:344: "busybox" [c6f82bc4-6ef4-4d5f-8122-f8598dc4c70c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c6f82bc4-6ef4-4d5f-8122-f8598dc4c70c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004843204s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-420119 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-420119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-420119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025514395s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-420119 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (992.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-420119 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-420119 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (16m32.5260559s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-420119 -n no-preload-420119
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (992.82s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (342.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-930023 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-930023 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (5m42.217795026s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (342.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (379.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-690577 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 21:24:09.870572   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 21:24:26.819628   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/ingress-addon-legacy-056019/client.crt: no such file or directory
E0108 21:25:36.429222   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 21:26:04.518819   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-690577 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (6m19.095661848s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (379.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-930023 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3ccaabb4-5810-420a-af04-4ea75d328791] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3ccaabb4-5810-420a-af04-4ea75d328791] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.005243339s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-930023 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.37s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-930023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-930023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.376240251s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-930023 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.47s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-690577 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bc38e19f-713f-4e81-b7e0-b806ad8f0f19] Pending
helpers_test.go:344: "busybox" [bc38e19f-713f-4e81-b7e0-b806ad8f0f19] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bc38e19f-713f-4e81-b7e0-b806ad8f0f19] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004395887s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-690577 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-690577 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-690577 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.1240566s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-690577 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (618.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-930023 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 21:31:04.517412   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-930023 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (10m17.788510736s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-930023 -n embed-certs-930023
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (618.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (528.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-690577 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0108 21:31:59.482308   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-690577 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (8m48.304957493s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-690577 -n default-k8s-diff-port-690577
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (528.59s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (59.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-233407 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-233407 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (59.441531105s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.44s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-233407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-233407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.519445858s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.52s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (360.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-233407 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0108 21:45:36.429321   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/addons-117367/client.crt: no such file or directory
E0108 21:45:53.637935   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt: no such file or directory
E0108 21:46:04.517361   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-233407 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (6m0.340775906s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-233407 -n newest-cni-233407
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (360.71s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-233407 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-233407 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-233407 -n newest-cni-233407
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-233407 -n newest-cni-233407: exit status 2 (286.234484ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-233407 -n newest-cni-233407
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-233407 -n newest-cni-233407: exit status 2 (296.050772ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-233407 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-233407 -n newest-cni-233407
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-233407 -n newest-cni-233407
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.92s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (103.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m43.216290448s)
--- PASS: TestNetworkPlugins/group/auto/Start (103.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (87.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m27.487540108s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (124.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m4.897130892s)
--- PASS: TestNetworkPlugins/group/calico/Start (124.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6rwn2" [7a3aa2cd-2d1b-4680-8179-1964689ff028] Running
E0108 21:54:07.565283   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
E0108 21:54:07.775687   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/no-preload-420119/client.crt: no such file or directory
E0108 21:54:10.608901   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
E0108 21:54:10.614233   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
E0108 21:54:10.624547   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
E0108 21:54:10.644906   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
E0108 21:54:10.685291   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
E0108 21:54:10.765674   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
E0108 21:54:10.926585   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
E0108 21:54:11.247534   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
E0108 21:54:11.888161   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006844406s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-458620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-458620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-458620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c6spw" [364f73fe-2a52-455f-bf0a-53ed93ff7fc5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-c6spw" [364f73fe-2a52-455f-bf0a-53ed93ff7fc5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005246784s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-458620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0108 21:54:13.168767   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-t7qjh" [ede97765-a813-4e3d-9953-1ccccc0ecff3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 21:54:15.729679   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-t7qjh" [ede97765-a813-4e3d-9953-1ccccc0ecff3] Running
E0108 21:54:20.850722   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.006338169s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-458620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-458620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (85.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m25.169154164s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (128.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0108 21:54:51.572986   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m8.514142332s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (128.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2tzrb" [fbbadee6-74ca-447d-8afb-23d38bfee2e3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005890612s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-458620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-458620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s9rfz" [76321c23-d3e8-4f0e-b1bb-d0cda0886afa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s9rfz" [76321c23-d3e8-4f0e-b1bb-d0cda0886afa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.044566605s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-458620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (133.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0108 21:56:04.516986   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/functional-776422/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m13.599511008s)
--- PASS: TestNetworkPlugins/group/flannel/Start (133.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-458620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-458620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jd27m" [205303ab-8083-46b8-978c-ec7e957a1e33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jd27m" [205303ab-8083-46b8-978c-ec7e957a1e33] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005116416s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-458620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-716145
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (128.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-458620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m8.898591489s)
--- PASS: TestNetworkPlugins/group/bridge/Start (128.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-458620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-458620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-d4nxj" [e772d9ba-24a5-4d16-b01c-850b66ca097c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 21:56:54.454504   17896 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/default-k8s-diff-port-690577/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-d4nxj" [e772d9ba-24a5-4d16-b01c-850b66ca097c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005557634s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-458620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-w8mhm" [8c4796a2-5b19-4bac-824f-eebfb0a2a627] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004312513s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-458620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-458620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tgcjk" [31909142-a36c-4ee0-8a0e-b6e9ee01fddd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tgcjk" [31909142-a36c-4ee0-8a0e-b6e9ee01fddd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004628341s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-458620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-458620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-458620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-87tp5" [03a16624-6a79-482f-b1c7-a32d0e5451ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-87tp5" [03a16624-6a79-482f-b1c7-a32d0e5451ef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005356917s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-458620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-458620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (39/298)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
52 TestDockerFlags 0
55 TestDockerEnvContainerd 0
57 TestHyperKitDriverInstallOrUpdate 0
58 TestHyperkitDriverSkipUpgrade 0
109 TestFunctional/parallel/DockerEnv 0
110 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
158 TestGvisorAddon 0
159 TestImageBuild 0
192 TestKicCustomNetwork 0
193 TestKicExistingNetwork 0
194 TestKicCustomSubnet 0
195 TestKicStaticIP 0
227 TestChangeNoneUser 0
230 TestScheduledStopWindows 0
232 TestSkaffold 0
234 TestInsufficientStorage 0
238 TestMissingContainerUpgrade 0
246 TestStartStop/group/disable-driver-mounts 0.15
253 TestNetworkPlugins/group/kubenet 4.08
262 TestNetworkPlugins/group/cilium 4.4
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-216454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-216454
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-458620 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-458620

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-458620

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-458620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-458620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-458620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-458620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-458620

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-458620

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-458620

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-458620

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-458620

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-458620" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-458620" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:13:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.215:8443
  name: NoKubernetes-626488
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:13:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.130:8443
  name: old-k8s-version-879273
contexts:
- context:
    cluster: NoKubernetes-626488
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:13:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-626488
  name: NoKubernetes-626488
- context:
    cluster: old-k8s-version-879273
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:13:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: old-k8s-version-879273
  name: old-k8s-version-879273
current-context: NoKubernetes-626488
kind: Config
preferences: {}
users:
- name: NoKubernetes-626488
  user:
    client-certificate: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/NoKubernetes-626488/client.crt
    client-key: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/NoKubernetes-626488/client.key
- name: old-k8s-version-879273
  user:
    client-certificate: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt
    client-key: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-458620

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-458620"

                                                
                                                
----------------------- debugLogs end: kubenet-458620 [took: 3.885035954s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-458620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-458620
--- SKIP: TestNetworkPlugins/group/kubenet (4.08s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-458620 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-458620" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17907-10702/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:13:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.130:8443
  name: old-k8s-version-879273
contexts:
- context:
    cluster: old-k8s-version-879273
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 21:13:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: old-k8s-version-879273
  name: old-k8s-version-879273
current-context: ""
kind: Config
preferences: {}
users:
- name: old-k8s-version-879273
  user:
    client-certificate: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.crt
    client-key: /home/jenkins/minikube-integration/17907-10702/.minikube/profiles/old-k8s-version-879273/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-458620

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-458620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-458620"

                                                
                                                
----------------------- debugLogs end: cilium-458620 [took: 4.19056196s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-458620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-458620
--- SKIP: TestNetworkPlugins/group/cilium (4.40s)

                                                
                                    